[Binary artifact: ustar tar archive of Zuul CI job output. Members: var/home/core/zuul-output/ (directory), var/home/core/zuul-output/logs/ (directory), var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log). The compressed log payload is binary data and is not recoverable as text.]
NFcd&RqX VVFW=w./dTsM'z*Ol3A~A^r'ADJι+YR<P[,62ؐ(A KH!)pYj̩8J3#.ɥY]Rr9re[s2H]euUr_U]WWH%V]}i 푺B){2Ow+u}۫L ՕabvyYux|afKY*={=‡~p$Qsdzj:p.;L+C*KP72; ڛDRMc6wqpH-j0z~~~_&rCo6t*zJ:FvX bxKm.9 ~~On&-.錘 KYݷvar{GUcdYvc5&gZkXU [Yje56^O؃Wӎ\tFw4NsVFuUj+ςTsP8\@[PmcOEV:?{Rd;^*'_շ ߹2;'5\#P>LBez.ӝ{Dw6Bߋ8!HLI{ ׆B<8-"|OzR Bm}'*oSϳvLNt *h!6)c&Txgؔd YMȚNl'^HMVv>׵F୷6f꘭@!v@H`ϐk\{x9&xneSv_fD`"?%[4-z&%0hHks[uȷE&ֲ$at\JJM$*@FLQS:/HABZ79i¤\zf+,uOZ>VKwMmڭa[LPO)ȣAarmX.K퉳p]P%Ӛ`+u$ ;}Hw]ӻ&dQmiȚHc:+Ӊgy?RT5ȅsa=76p*)cP IxZĜSÔT3D6F+׵.p&ryyOȼ0r}Ȉ9e n[LG@AM؝L} jd*"aw;* WW)ۀ{K>RYg+uRsֺgɲYFHz쑂lDK󞫦^xl Viphbyί  Ӎz&YfE'P *TG#f\k\ "8ODm#`$SÞkcN.9n0q=6/ܾ{H[{g_àꄔB.PrD\ U[B9KbAhbQxɩCŘS, dK LZjx.!(]" jrICКfFVsX)ƙ°řO3r|y f>ubۇ6/gQ)KJKm(:$.3G;z#rS3!FtNGJgP/Yڐgj{ɑ_S\ಗ-O@ˑy)Z^FmLb)])vxcH>=zYH6(9[`1&Ä9X(d"\.(cI1t69kL/g56 zvWPKqvN˧1-ۻ5}誰;sVu63g1\*dţ;ݯOϷrc՘EO#; ںn'3=k#l:2kG秗e2QA#}X.积Ӱ-V\r%tǼ~f~`GvˀE_F|sm4}}g=Sߧ詿~P+zj/>;ӆy/O /?@dE!Ig4T2HYuNXƘDwQ1Zʨ9!9[ -!!Ţ6\*A. <.쓉fy21K1wWM鈎(8bi,m8M,r㶳_kmމgymȮ;~?9KI;ٵV~ATLr ƒq\08Pebs*D@*#""T]xP-UESr"TK` PC `M@mcd$vx*%TʲqjlTlwI$!ZhḓcdG8w4"9|W93|_JnC  >* Lgg8BFkChw(kH1)+!09%k)^X$Fy*]u#}'o_Ճt!%!Ն4v6IXQOӎN."FaGofG h7ևb?ZJv Ygwv _F|W}:_ξrOJUHz'FJ*p>cVi-emm"1";Ӎ0ިM0 nб>f,s@)@$o eYrY:l~jM\PMv|淋՟~̦&g7}~'BNNkկ'y'Ov~U[znQ'4GC IUEOjx6/$uLV c[˛&Zt'2 }`Tv,+&M/Pj?1~{K&/΢|oLGֳ}<5Xmcu޷[nXIЕ +l>5/,уpY oxZ\L;.˶ᘎ<"8[F v7.:8qN tW󏃩=_X!kej۲Gj2g <_ɐT]ҋxgr֝krʼnxqp[i9?=پ' h :'`!c :>@ %KqIQ)TJA=xrz~d6담i+ox^>|?&l`u"EmTIنzW+Q*&2,R)*=^Ыi7뛫gvm{DL7ˈ|o9vli GR-4t0~ cS3*vÄ'>M>s,T-țBg]Iu%EQ-g_2#WyF8YZOdٜls6ATy]p>LƘD$dJ =0i?ԖfQ" }[jQB132L FB%Sf즪f^[ /E2VL<魝WI37f~-vPRAS.ZDV#Yj3ߐ1P2!31WF!Q88`H$6l!ӳ-9$egjY04m8{~JAR<5ۅ2S’1ҳ_&핉%8'`}a)L64?.{P.5pDRPbSR 'P#(%a,)؝t\RlWbtq#+6;(istoGA);k-eƒNF` 3,ʜ-IPZ<~s3vPz~ }L0 ٞWo#7;\`K6Egϑ$&r1[e5J֒6w}tI:21JhRTYf2+€.YL ȭ=V6E.u7cZJm!KHtqk}Qw]*B}  /XXS@IlwjIdt۲b|A,) 7GV*ʐH t:JiM, R8{D5*Ͱf| #_v`0~}S񆜐=y8u|/tbz{q>, l(@y`*FK%2 Ab&lB%jGCTJ蔔V6ٙ zL 0Qqg#v3qv#v^⚋y,L;D~b-Pв#j+ V;Yd" #{lң %% +E[tTY[!@iu>/YfҰ5y$"1qdbbQdv{~^X~1"GDLZ֊R΀IEE( `16E d8/5zJř)!5AI+5Uʭ[oψub\/mYg3-9Uc\T#.U=8 0 #1@#JJ:1$X_[iǡxၕ^vT\Tȝ_k/B"}#n~|G`'֮R7oxe/3O`Q&?wR|GqM[Qc>LgwxK ןj͌'ׯx'mx{A9]]9Y+D<g}[GA!$DDCnؘ.nn٥XƄw_w}{3YLݲBX>;7|&7/ {8r-6N7WGd?5`]6Y:2;Heg4j̏SoSJyMԬ[J(' :PZ8:Ō) DȆǴٛs1VˊWҚB"H1II&:%;2~wL=2C(DZ&E x9/,БϨMpDvV˧'g,$wKIU*B«U^ AY6쭋W-r3> PZt.(VRPEHg`+SKM./$Er̅(J^i傐lxQAy>Y F@"%"Κv1:Fa.D7TM$ ɐŏ&>~-?8%S{Q}XiI'&jquT4IXo9Y„,2O*dl" f#LA:538H*FhGմN~"x7ޫ+t8tdm"̼ȢD-)&.[ 8Gkkw"²̨]AMJe39TaD*$ "1 )nHD'Q$Ȍl1Oo.DSMnrrwSoMqx毗^No/7p*?*PϞ` :-fvb~ ˑPOE[)Z<U׶n#z#J% (ՃroP)*hP]mٱ7|^c> knX2]^ۼgG~MX^нZ3Vm:siG+ZBDuUiʆCfrE"c Т-)@/qrEy-$$U/jNy8YAV}j k_/纻l5WHg:HsLo5Gw0uUP6h)F]!٨n"s<}ةx0t5?h;])BW/|AFZg t*pHtՀf6}2ʮ]#/=Isz`&O߻zZ a芞@W$tKOy>Xm(/N޾YhaXa3£ԼP5oYA/鏋y$$ ~|p^MůzIye@þ4P5B/7]6*\LJBWP^"]`w*Xe|vCPa 8;zi7]16?cvϟ?Hß?hw(Yȟx&:$C+=`^]֥7lv?Q#J{F?.qt|ʔ'rt&}ܶ.ac^K)(zwY:^_v=.ޙ[zn -S=tpX_?88kn}FJO睓3Uۛmv,,+u׮l[¸>ccɰܣ9Usz?cZ&p?[j?xjJ 9oWyp#l̈́9β6F.,UΨa'4݅q@%x)>[k?ۿ~ćF-g9gKz_WIm?tOT]By8l))klԊ;YjQ甙B3 SHRUmjSUq+qn+kGajǩ%udD1ҁݜh}k5DXbkWZp1"17FR)mv㼷I`b4@./O4ԻfmV)UwPRmltT92T#b!Y$3.c"{ B%ލ}"Y?}7KZ2U61k(Wmۀ+ tEt(4q0 !?ri*kR=^bØ- _وBxHcL~~kǩCVTHk0^yü 9YrX\3 >O͹xޜ@# yrgk*uo-TC))*&՝H9iE%xuÜ|8.wh- V\j() AZU!pզm]"FUy,Yb1`Y\S>Em5Rܔ 3HT'*HL3KF Ѩ ў6TnG0Lc*yˌWh|oz`yX,ڢ %\^5TTlPtԠ-;< ƍ]!;3mZq9*g %ʉs*j]vuL%>,d0)φJgǚȭLt%OE"%l0 Ͳu{|KxٴP]7.:zyc֡sVBE(kGU((NU$ԓ.e ,~/H1?'رI)x X-VLCjD*1+5Kl`W&q$8N8'9Z,̼޴X ܁p*6CA adܠѦ!P CР,Lh=4v&׹;f[*QC(]g9kƒY ӄQ!6KsmǍ})q8tR 3=%@ iAA[:* pqYi TLgJAQā.(N& gYV(h ;: 8匆kc80{$uYcuz֖yCA e`8N6>lܿ\ۢŌTUqǬsDq2|cL±yUC:'D_ {0bq&XvX/?WW |oM/ L} h"X o b9PT8xig-:J`"gIW=$B+ut2: Ρ&i֣YΈ A9AJP$rAVWdo8dͭ!`F`ì<044/! 
Oӗud PG-A@8o]UBN5~T}tUU lZ /k$ S`u0~Hv?n1.r,di*YrS uF[е 2"hѨwPSvz d<Ŭ}(Tʗw}C=n;HhʠvКf6CܖSBGٔЎa:9@'^BkڝChgՌ6X\[v#h= y7e#kvq5? F#tf% TQFiv%&HtyP*Zho:"vVWpj,:fa!d*)@gUW(!hDpPk>Zl?vXwHgYLg&)&P:nڀJVdǣx/GtIfmw$aU&0l3辠{ŧ`hT ƃ![[M.OƹGyͯW]Ώf1]qI T  BzLqtj 4zR6 D(IlQfq1j05H<JVO ]z313@ys0WF7*3bO:V z7$%xKT]0+9]nDdh8h5)QJjn *TA3Z`hHuUڂ4f( ==`-yh`cW,XB|QDqmb}rӐpM6rYg^UA xBh zsc>j4#paw:H9-0 ᧀG:ʺTuc@so66SYRO -zYTAljP%ؽ=7f 1KCbR 9OkuPz:D=no6Oւ +w6UAi YVt@; ZfSZش =W&= M F9`%#p"{α`87\ *w^. q@D!C1+5PVR&t$b4u2sL\- H_6u<D;y0B.8= vX^WmKZUmr_~ymj^2Ŷ% εD.c򼍰Kws/¦'nW*:$87q ZxN [5H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 : 0=$'t 86@A;zN'.H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 :X(rpԇphuaߝ@_I@/ Z+c $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zN ~a hy@@8^8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@/ z^xXj:F^^vo]Zu,wiݛ-tҖc `.˓ߝ2;&ˋܨ{eqNj^}\"ROy8k.Oj[/]I{=xP_0&FYǭ\sOk@yz9޿} BbV0^~uSWvVs_ DZ_i87qT02A^9ۛc\6cɺϢM׷l^j0Dt].UqNٻ7cWE@ى ĉa cA.#!%S=3$rH \,A3ꯪU\YB _oKiLq6ab>^ #`g/­NK2T=>.e}~{1zNRk?~1&i Bw5flp'89SD"tZbwU&їi`r1+:BWLy+[3SIB+W26|ڴQ ދ9D(/u ZOJ6_Yt1|^?~rWdRu?݊eȗ,_rc$3?ՖAM|z{3'i٠hHZsTo& 6߹/Cc!P a4quk)E0rmA"k0`l<͑)P;]*gꫴ?f!ES2$P&r B Q2('^Ha ]R a6R<`mؤX3Y xs'\paP[g0Xye y0ƃn2"9:9[q5_ =_5{n_hWOX6"IS3/z?uΌ=Ыq B Q+= }ea짓(M'ؽP@r"؇0EfnnyM>8 CU+Xє-6`MH\:$ ]J\am I¸prb,ެoワ\I߅J?-U+Ћhگ?0O=|߸V_AK&n(DUfw}5*MoO׽*.7/^.wՃWu`MKX|pp{W-뱭9Gzż(' o50C%FucKooi ilF*L]X^âa<)`ŴO@6|G卭e'Zm+cJUypYXM/KyMb/>˂AATLR=7ށ8o_}ݷ/{o1Qo_߽0`SibOfOb|]@.ʸljl` L6z8?߅޺(,6Zؠ>܅qoߥ >5 r7/ø$޷v($m%?|j+x(VW?/d$p`[P+6n?J'Ɔ =?_WS,w+X+\mh ռ^֙CLѓ!-Xd(86+eeLqXtֿ.KeͲY̘C:,ĈA]Qg ֳ\t9`-JZGy Ea>U ,&՞k:FY ‚SAgEZ` oP{o> K<C[?o lKwհա] x*~r2<5Kyf0.au&gD)1lf%YSp%AC`g Ϥ1hb$`VE NU] \LNOcg?q|.إr|7Iɥ7Y}wbDԽOl~3д -7mLGCKtp20̳tKƄF^f&+:M{NV5.Vx[-*vZħ9 (vӰ)Zs=yХy˒Kjn;=+znםIyD~7(g.I|j-n^hWw~;xt@Yý6Zmɍ&/LvjSMBfEi̙w'ɰ O:]"ɌB2Ì͔Ej#սa5b'SOiu^I[m0D1+Bӕ9ÄĞ'S.3vcJ/_dU jydU{;[G7P5v'TUs0eG\VB%HMIÔ6g@U˅,,aҌ1.3(3ވ^^(TM"\!DsI[:٢Ar" ֜9*XrkJ'+ 5ʛfE{gN|~řO3Ar|\!y DXPQvK%Gɸ,?Pfٷa1?Y}]X|,tB%$jzey0դ^ބQyX){ ^﨓e;w7OcՁ݅{OJ;$vr:f T٩$X(ejCr:i,*N<~O)qpr&h>13 95@`̌"s:$1f7'nOyfvN3-6 rQPtb>XڝyW0Y~Zl:oUwK#he ȬJKx`BcQ1Äw*)#HHCQ>īF8FH\U1\njb5*L*8%: 9 "=Yidj~ov)}ww-s5n I*G.g=D ihtFKN(ŀ8c쌻1wu#&ӦI<ZN{[`%8Ս[Mu d:M='7–c(gQv0[7 vRw}7PLq?]Md6 >G>4я&p`0K_n +ѫ@pЇ"zD;32 SVp wq2^x{kګM)567 w{҃m|pߤ`YXpiQRIi =TۓCZ}ÅP,]8sWI@֗dͪoenb ;+\Dl/cPnWiVd2U2R\gM wmқ6; Z\ ~qp2[W(0?1OpNeՕϞ  9qO0 WKmZ}xbeEmi+"2~K12I:4-g$n591хG8b1/M4^je:}[vӄ(ͬQ_X "m::n P<L6߫#3`zXϊSHp#KՕBHLgbsBJ$,_?; 1j"y3!Ήc^z9;nZg^3ϔa,0RF<v;9FD+ l eYEk5%Q3r΢rjgKsz99(HZ( 93>煲ɂ7F*G&t U}䐤XtHj*e;Qq`8-RLp71T팝Puy[eu(zb^ͣ88/ l/ffu?ۛQi'ZnBjidZ' '9GXkaP0ڢnpHq> %M 2a*r<R5;-ICKZ) _)).N x` K0#舩%Z;Ľ4{V Mh&`uoB-OfKU RYU4c,P!_%%R!)N3$EQ8&%1Hvmr8USU K2Rk hF]5Q퓍9 D;E":`'  VGΒ+.]`c{dc'cڋ.(H,b[yŠr#-j,8:_-~PKEB-8X DIc3ڃqW顭Mnp} 7yΑdFxD#A"d1esTZF,Kl1ʠcU9n %uN?hO[1G ҒQJq0/*Pl8J:,>U%=\YSjZ CoE*Q{:x`GĎq$[Λ|^?+8%:(I}Cu Lms5`\oѭ8*ԁJ\.ܣ]`RBPyYQJ684DOI_9F"!\,:(mAidS´،BUiA hN:NIs S4z+]Lr;g@GB H6?K@FAkML(ٕzCmRC0\l)d_Rz[.f_IO.eT^3gpn"y15JRH퀣-+N1&@#KB1YϢhRNCTTqG.!HFb5Vbԏ7{ففUf+΄_v82 U^Pyo4~戝Tb SSCMTd *$$H`xdr^J(FXTqr"#{YMtP+5Q'툱 "aDbp#\/]L;6D~b%P{Ӳ~) +ĉd uQ 1TJ`4F-L1Tv̧! (& U>/RQtk"0G(d$jbYMP?=q_~l0""<d2Z, %I)`"*4k&n QhAշVFW-(-06)8Iiʣ,e#@}j& ̇ҧ[ g="^#.Ŷil0..p^FfIQc|tkg0 K!HIdT hs]a)ya<ܰ2 lMɶ*f#׮>m'\!Mhr;$'fM|bo S:jWQ|?|Fշ  4"Du2Qͻ~+xVtRtcrUI)Z _}&SCeDxox5CZ)!Bl (<-NBYnJTVvZ;ԢӉxN`?!ེ l3^8r݁ou^#tF{|ؙgp E0k܎jtA~(?Aw #%Edс3\?քAx̑x,T*ƹyi-!F&g*"4x8ɞts=ETw$V!L!CHMQ4 D:9菉 )[N{ZJ:J^-!1@XJ 4  =:LDir>_z G"rU|Pھ4\~:ѨX%xt,Zf>05J,hAK(-w=? 
>+րwR.YJ)2Gp$he,E}pTFs9P#q8H"|Ù֖pKbF)8 YtƮ~'3c}|Qܺ|O IjEa('}J"(i~( Ch$,I`%Ms PwG@MTEH+^ZHG$/^Y48WȺT Ct26!2D%hה-`GBlD|8*㙏{"W݊Ć%7me0O3N΍'-wM5RKT"ȼwVH.I*Hi\Rxr+"8!4l()>PLE65fu;Q'c)`(rU/!ZHQQkǔD QMK\e,{n.~HiB!lwbϝq{a4k2 |''3x !lԒĀՂX[-EaNeCIm  60U׽YuoDN_풦7z;&piy.ͪuGUnz(x,QhQA5)ylC=uN >~5FP㪺k|[ټ+8Vˊֳ%lޡ;vh1+^ ~1r x}^$jvZӏ`΍jIp+=PGTH.`}8<Äk)`;OO~/$)8:ej׹w?'26b#c4g2C@pLxB/ "(hpit&PKO\xАH&Y9ZHΔt%.g sƶ^,6|94ZnTVbnf<9Ν/x| :sτN:[Q&td!H挐 @hY`ewBIc4 +I/Y^jw MC@H)FI||sE4^<:G"gSکk5L ^A!y 6v{Z iH!d/ I#5D"Q%MPA8"him& $ qj|xԖ eV1'ƪbR8[?NdžyCBA/#<=adw_y)fvOkw5>kcI?"@%C`iRQ#:r`z.B`LxgXJ:28|'I֌ɠCJNN+6g=;*ikݓݟxʥGm-q!,Y%QVQi{׿XWxSĊkxϙں\*"CvfmQAOaq6+<7Q?l!jRP+|Eem%4A9C]m84QW_Ͽf+m"5#NmLu^!u)&W|<^LӷdQmQgZxrZH4^͌kUBV㻒j o͓1pG7P R|06'U7I5ʪo|j5,m/ۼ6'7 AaE|LsGy0~_Fa~oU])aV]Ue\ф Wo8SaFegCɔHhFזW"O7?Ir̅ $Vīvp"0,TT=~Iu7n,}o|b_aSdx}SyeIy/̴ /JsK2Ji:hpeu"7s>.S>Ng 3lp6S1 ೯yU8a~Rac*&U^1wU=+;WW4gAۛ&-nЌA6lタ~.:pϜ/K† ͷï~Uw(.qjstüS_6OҎOIHq?Ubd枺ܢ{ʵҐ/qOJ fh)RwC%t-[.i|s Co@̖Z<_k-ykI6s (tUKZXlk禓VQeCc^(94F6Zv7 .(',]Ǘ 2Ol^+O|JĕO&e\ܭ 9DP{{ #TG?<Ú.BlT xƩ%şs/9忝d'՛<&ŅhbZf"D2]5K#x7 c#M.,ͯ79\*yna`儢4[w35UK|D$L&df0 ޶5ʯU[ydO }O&=V-xv5\|@Rp5frXlq7φڬE|ɮoEh;CX&ՔVcWv}[(f<>8ׂ 6SN1+(7Rphdia0aErJd?8jQ2_M NW F\ _>4_\ a~~FknG;5Fi\-*)؍ F Nn gv͑`wkAS?ȇ%h\̓f7׶Hw;ɳinZ{0梗F]S"%HG<`k p OhCp\/+v!h௃eOߞLsUvοtRճby6ea{.KCZ *KV(#ws74z&մ*Nn͔63&cK7ek/=85.^P6=SۜG;}hKOcck)x4&`Q$0|EqG[Z?%[,M zS3`f.73LQ5wg{dZG2͔v<z҃ _][oG+D^698#b l, l,}H.Iٖ߷zfH)9"[]S]Uu]\ΤLg=>j >X hʇUpJ1GpEq܎?% `l2K458YN{)vQHoU S,aT۽jvo֓ӔGy 7S3I>_)q=f2> A^tnE mosȾ\mYS$[OKMWhp00$TBT]es1LgV(e- Hb':e&t))hmhjx&=AY3e\ H AY- )D F:. iX_R!Xr($i{؄?IX8F.Ůe µDX/5^#.9 F-%YS[Y5e=:I5@(rf&(R 4oǠ_)HGTس(+dnx<6Ne"HOh2 Tfi%Ž6B@aH&= s- M`nG_͕CX)}@25G \4BywQqSMM6R41ͬe, 40 $5 "t`)0|"' HnCGfDt1X16489 B{oघ 3LR(aTB'7.8^$K&=߽5{?$my-J0FV\aJȍ ^~U﷿ݘ7!$pըj!qhn5?gC_z#v(זƥ ϋ RL݅­_ o/PgO;0! :Y.9|B.Z.nwU+_% "' ˵ؤVu~CML.Dk~[_U~;W?0O3є5 \pjRA_Nms yƠ* 6uhw7U`M^sk>Fq8)֖V#x<o0!T3+nn!H%k2;ش>ƹ|;&(ݣ.&Y7j\St>,e =>?]nRʇ$6E>yGxxs _@]}oӿ^_o|tv7W~~ '_`ElX M¯G@YC/Z54*`hۜu -ۚr͸q˻呛o J/{}~=?f.t_Tӥ,SO_|Ȇm~ qe9U?mPa$-'L|/yue^k[FUb~k϶vW#Q0eq%Ĥ LQJclG!a>%:ucBer'5́w:ɇKGE,f^/"haFK[ΑQx$Y6ݝ΁w:0ɾukgh' zdycgt`cnvξ S`c)Xߑ,?ɻdղ=(O|WHR٢m$8isWVWp5Ȭ;հ!}Za^omƷF[+ 9:2E4P!)nb΂wDðq .,/T[\khUwq{' KkSۧNyV/HU!5< %Wry1~nɾ8I-hx Nffl<,z.(%w YҟwЊH:ovi Pr!(,K4cz3 7"AvPLjJ i$Kœ"Jdɉ0Xs`n;VV@Bgk+ j&d5G,<5ΜY3g[: 3AL"Cۻ!<?Q~Xʹ ^:l|8W>RN|]N/ϼ.A}FVK> vD4me'#kxR`B<Caj3j *` 1h2l*lh>֤ 2;}Ɠ9To`ya5.=]pǓyާDm.i1;I%t/vwzf@u1azuhA|uV=,he=vjB Zf%6v>yxe=WZx97(6ʲP|uOބۢ:Stʬwd-^bM#x,])wytH*zUstg9HQ`^LD/%}TV}DF]?a(ƅΏ6,6OcX<( d֏PpR2#ǵ&VycQ2aPg}D2lsㄓٮKGYT):̢ÁUXmdx\8a~(B53n. r1"*-1 XE ީTUF&%-h|D8DH\q': cF1`k id&G EJz[p,i}b$v v? DivJ7ٯ\[BHK@'l44:%'bprnp1vubbBIi4Atjjp 8LEa=F Y-Dq VEn ƅީ~7O&i0xu?zߢw8;{ƷWp9WuM}? &8ކ>Β >fiM>9` GU.pX^ rBrߖq]GD/;1#?,oR_-H5/i<Slc5}K{wq̵u7ɍ?w^{Wc6]$~ I(bɗr㚙h*ỳ!E=r+΁7By&1~A [_t[)Y|nMr;{l/cIP?]/,цZ>69,T .y%_W.v>,l x2Rb.{oGib/QQn;\c{Y&Ar} ]4ʋ JXol}`\Mρ_U$< Aƕe.pXCd5b*Ug㻩 @`1X6Nu4X4Qw+^Cv" - `kwut*դQ֌(eϽGS9l K4"_tyx7@ۈ.> d5ˏ h]{W; *{Zҫ n8J%PSdY^<'|$vLrG\/jQ]tjofr-yU _P5hS.ֶwAMi!B{'>'u8 ,}sWNL*mU'xt#y'&{`-2{\@bXvm|R| 6)#shƕ/x4kXbm;?QU'6oB 0CJ;o4x#cZ[7ӄ(ͬQ3r9;3NtЂo֟@>7F2.90 8{Y1<> q.RERřbN"x#_kxc_N@ 1:U[ƅg/^pV ;ϔyGhu,65gEzc[ EgI2,ɢ5DY(pS?)K=<$ӞUIeZ{ʑ_mVG,ٝ63f"YdYN:Eb;ȴ- :i[Jto=N|(9q!kœ$6 cԮK}jV5|U$c? 
1AԹhUE(AFNgI!xg :btgMPu5v}sMHBEI&7-EѶ(":+Zn{uj.Sh+䔲'gFk1T,0߀RL1ʷ |)hbFL؂0V=#1g6.k@g[&Ύ_BC|bs.lQttj'&VmCtޫ$dJ-"`S.~ACMuՕXtgVR:p2 Mh J\ FÅ+:Pn{\1OyDC1ҬaPT"dFHH0!B!q9bMu)%orn֞4ѶRۄZb(>P9-,V׹C5Pcu q 414yh8Xm5}U\MUŪ-b1;cm;GUd&iLixĠ]kUu"Zg`T6)WXl$/%PԹ*M8CzWCO_pz ,+C {Z>ݖp7£/n>4tFETwX@KXSa2Yg$FaOޙ]/qꂴcz*O=nzr(-3ݸt߯zk av% Y=x3(ˢQN̢)bV"l|ۗcԔ: Zmpƌ.)k \v@UToMƺanf< N;0~vސ=!;ɞ}e8aJGG'ξq]}5N 3hT.VlhJVER2 VZ2,L)l[dQ1ʦdin=$%XBNa"CU#ncr!<M;W'TsAE$E)αѥ(+Z2;vC? [/9A$Ζ*{(>J^T+*KŔ?&v8OW[қc<M?G#xc Q1K!;j6cwlHTPJv*ĒPT?FWG"$7fVꃶ%Ϥ2{`sL@` {n&v5ˣNg+CuvӒ}/ި}ʒSFL T45g*Z=S qŻ{iǾv{2s݁ `ŜȭG bmFt?:Mu~[-`K'荵$8)XSTѽ= :?nK'AW1J`sèX90^0D𬠥q'e䜾At6&l-Ę|6w0 Q$ dsbh7qvhtz@=Zw+M7^ر{;x+}<^^A-(~ @لIӖ)#e$2JQ;bvb98O;7=m˝.7;}{<SDNZ!m؂wNZ@b)ƺdGE5e*hCMu +2>Umg;&Ύv6Emto&`Q PDrV(XCL5ckc6a=M÷KG~`pͤ"Т+  O6 ب:Oα A|v4'#m)Y N$U@ݍƦ0Y1T8GqPS Ј= e<#hOhI}SEm~(XQGg|vzJ֯"m,O!yGk|(w0*UMF-XUɗKn[RY G]VAI#b<`v3wӐ=Cw S,Z`tGNOYtӕ΁DCY5$fj6)3"ͣݗ ӳG=Hc N甦հUeiF48 7nG7B{ o7ؼ"^Y㛳דy]e=ysHZ}'g֙my/؏,렉O7M6\{֍BW__d}2w'-R˅&_LksH|2ns?'7=ų㓏0VqQ\ֳkgُ$sBA<,S>un^B]]B\zc( D եWNJqu`gOQyrS&M82ԠvXjw㒀36Q4h 'vR|{s'6oÆ+W3ι1#u~/N~w[kP/%kȡQԜ!5krqQ$eqo]odz.;}gb/e:ۑF9L(xAo^|S1B"F L$sprNαQwpr989'sprN989'ysprN989'sprNo4Xc'sprNnjs@989'sprN989ƁG;H $wn Ar7H $wnd{gULxD$wm䮉c!k҆'kR1)v+BEv.fu7/0w@y>PD_\y)MdwsyGUG_Ίu]f7cCkl*R.U|yo)uz\"dhqYzX}I1z|'y|t*0`T``RuvT\h.T\z0+o6"_;"Nsz 4MYa"ߝ{{@6 uiH(_ r"1\2(PspFΐSԮ|֡q!hbƨb!ȍ0E }abZ*Л8; koz@+tkcz rɱDRc3TsgUC*DB,h+#o9 r ./LLƕ(^=[|֗ƹr]gkF"j A $޺ffkqP1/ mh%cE=]ZFYeJIYyh%NʹRr 5J" 2J␶<4tL|cX"hU1iJq")U÷gz8_%Y<8s`gw6WkjȐ#S=5HJ2lYfwuOUuUwWwK+r5؝,qP(ZkeebԧpLDkB_./f嘧sh =qZMS' jj%}ڌɏٍ9խy:Z_#_'?\.nfw+b.ոZ[kq8eyoP|jQ:wt6 iF*:& V0bŧYzxcrQ =j3ɶQʘd5"FF鼬w4fImMS_mu S2htfnD˫7~p)A~,oY"3^q0Qj#TY-(<,TGw:UՃxT=xγGh5sY5^PwoDL_c7b 0uVؿd'Uy@e`23Qn9mF!I l0(P Ӱ[plJ@Z'R_*eA> W]nmtw~؊~Zރ[3] YHy[-ߍ3NIt6EN93+O9 J:j #ŽUo^Ik0lbVj cs0!@GCJ9ō8`1wa48IPƽI[]P3m|ne,n<zS#'M۹QM6do0UǦ3ثOej3*H|yvu=.}sMm͟7ǂVՁv`{au'Wi8 kġ^΀Mz UX +j@g9MfMܞmM;񪾨n*|ɔa)D;1W[Q-=Jze]ZUOMӝ|LL':YzjOooBB ϾMw [?GbIaavSnI[_VNQk$Aɡ!J.s;SdLԈrF5͍!O^9Q )aNMт|+{jLSj 95@`̌"s:$1f7'~;!ܔ719QnRYg~3wiMViL(XT0JŰedBQi6ʇ8hh5aP O}4-Zdчᐌ}D(۳6݉^C'mw-!I%ȥ ~ N}LFgR 6 g qo+ж=!&ӦIoy&`3)d%@)[qfQ7Unuo/Y=8 n9g%8}dq>F=qӋ8r2nlhOS7l<;OAZ"Na8 WUd,E 8&p]Ftkc-}K{edXw3|p>9},ƣIJhfeş$ˍ[f|Lc )z2zzgK&T|@{߀1KWoiJ~is3rev kG+4nr'y#ϔ!-1|Hoow.v@ {Y2Ҿb>f<1Ӣ:).>Y7z>H[ЉHJPRY -ٟ&T*N]u7Iتnx+R0v9el!F21~lFtu;Xsc̱|9MhҘФDCⲴeD4h7mūԂ:lQ_z)qi8~di4xϫuXۈ6Ysq9wFa_yM\kɴ`սul˅co dyP>5Uh9Η'ycMrlq_bsӮQ:Lb޷^RJ }||ܣ]x+(l6`frR! eq{y^URp D(3)Q%y( ZS8 0 ZqHÞf6Q9 * p* XEd:P>B6xјI&rrU2F.ܰ4*V䵡2o,"i)B1A ZzJ" ʍn#9)dOY[papTǁZ5 nlJ땉:)6p3:'Xh-Xɒ6!۠Ln{I18~H&?z x`YUIA`$ àQԌq5As@Rs'}G5[cR;ĬW逰Id^ 9*5O +c7r::RHTMa9m=2cy]v@S6{X>}>_[m-+L֧1YfU_ٻFn$isKASv0 Y#$y-K~Hez,G7.>~du^\B3SrJ xR'm,x$-h}փ+2k yp}WQ7jTq<&drKmB+x{EF͑1!o~Nj>>$/vϫ<2|.!+ 6y8K  @F|T\ KQn@I̔PB+(1@Fb acwvĹPL*͌hi,P}~PxT'TE?+poVR@IfpL$(*&RفZunRQmMF!0<-NГV.p mқJL U+_5R iK9p\WQŒ7t7wT?8ݓ5h|#v֙+2L),7vUޫ})rRNV*m &T8T6{9!p l mAK4ܡɴce.]S,ެhĮ&&c\b jWӎ=Q{h 7_ 2q2w 0/iT(I,<'eZb0U0+m| E$H'n&HE|ET'}e<&xny 0 "Vӏ}*#" 8 V?b2Fl9(LTEHJ&p svinj`IS"#Q6II.kLLӈdc4FX;䨚8#D:.Η鬳싋2.\ܪigx`m@τA}ȁG`AK.2JvŧŃZڱ/xg'{zvJ5(Zg~4ߔ|CH/)NfmN_?}G?9&DW^fRO.O?~.^8q?NZ4Y^sRc7-.Zt>eR$nĹ_gI_xuH\](4/0$B"ݻB7f$_M˺$te-6,_d׸iMKbF]'K-.|Q˻7xށ =`O?L×](" 7^W#Jlpy6ߍW/v|a^s~.zkwWש煫Ik@=\=Lʮx\} +Z&-UéNF߭ɖIv#$Vؽ0N|pԉ'Lџףd5ERTj~o@j0r ~ mv iX2b` R`HtR0%4+xApE _ \qj/HZءUR@`U^LNۢmry`4y_RD}*QoJv7q&,)hp/92vcxE_WL"e)Lyqd2__).F7 ?NRf|i:"`99.$A5&HX29j=Ԭz_x(8͈w0A(9iY0&XM^ŀ!Z} #-Aq=)uv|P>Z璥~ Q:!08'&LAE"l`e^褀GcI+`TH=wDy A@PE\hzPz0Df5MɎjkg|S.R=voK\X=C*Ne]()QUd̼ b;\'0]Bu3lstx_U3WqU Jk!iZ8i#2M@FHGB0PՁT@e6Hn2jHbgH-GZڃ+Jnб">,*T()]̚&r! 
3_na".NZM*!ts ?ZfaV[ҢΛry4g~Ij"@1b5{<{F"ȃTL4B/ch {WݻrhrVj]^W$p͵uCMz{4=^پ[8gZ6rA`nMﭞW:ء絖a6loyw՘wR]3Ͼ{:k?uGOc[j]r˦ȻeUO??|;aoL{s r|.OZE\_s|8]r-Dqp|r%+_G%n]]>W8X{z8X|LYw0V;?_.-p2v 鈖}-iDxl~IH"JD KP^Āp# =*_kM]R.y6Jz2e6`YutH`Ȭ u|lh!]՞Iqq6~ 椏ŲCǴA;^V\~gӗ?]CY RB2`ԖVz11Yj FoMs-< “ B^`A qcW mXJvXץҤ3#"\+J&g/.}!$`noP^91xÇiZ,Q?`ˠ6gYOC1+m51y@!9V ɒL(>hS6Ixd8z6Bπ<B F#h/OL(RaЊ (+ S c5"]=~d-qަ8}% >i8"lc<#+Mգj@[ߖ|mGg~ gP X M.So?6biļ6bzv"gtm,GmI__6 HcY6kT4~ee >z$\z%u]7̮Æُdzv֎{Sbz|zizQ]7PTL;iŪLܭ<]MjeϵPk!zx~9+8.&_?Op["ZZ.w[,+-c\&7?;:q |]Rҕ'}?yQ6XnFE Wx2_<,?_b"AR(lzI&וCC&2/[֝/t[/ C` 2vMqYڐNj͆'jz-^l{8U^kcTV1@yIJ9ZEfbyy)(ߍ5`w;_NӢ],S,3 {foJ"'ףh%}zM$\qN'A2e-qݾj=\gi{2mu]!?=޾M^JԚY;GlQ19 BL@M-ȫ|K\:+5Tsc y#EgDH@^K*gYdSnneEs|Au8^fJ3j&c[PֶSsk}hRJoZ8Si%}?|W]Unjijk_b9.vZ~riI'sd/;e9_{wMiK68gC0%zN]fzh?uRIUힷέ]qw{So^ cYe%oRd/3ihɓHAfk<Czo3Yω`g/q,e[bj,mjh7ֲ2zCUqY})yTX !\sn` h}4Vh$;%k=xhK7o=s;l3/EEXT;%б@6d=WCYPhk WѤ3ȃZ#cvrܞ\&"lur8/>PyF8k]qHT)Ԁ7v >=0jU2|sl#+gI^S` wq2{.S?*nE_($U!#ܓ:FZ}ZwQYUYe-ʀQ[QU9jF h}B]JZڱ?ᆬna1b3f(5vُHډُdlD崐ғ4iekU-蚁myۖ6 bN3h`SI !S 4I[aƁF1c`1Kc(<^|)֓(%68FnL87AAYX{28k$:ݢCΫ02$;l­xAeP$#MI0u( 4LL|dPư1DEҞce꺯rg;< & <\*sNsRL4:& ^bT`ϲFHdP qLEVXJ杬lgY:}Ll_JܠpmQRh 9CʃLIPb6d$"f@ e3&K^O?Gn&t0GDSF $P4<@FVRtRġkj xv8\@tD/Knw#!1Ff([y*9TN BҺ8N5ppqֱƲlϋ!ZYK'=i2Q(*Cxp(JY5)ތؖkfR5*QSӮaV(2zYyeKV0 ,Q,CIsf:prD;g`lv3WӐ W 8 z>.kǫq蒶d^O%eK6ydm2O6)._.'FfgB" л\n~痔K콠#yСdtn{}7PZPlt7qz$ /F.GnRJ@W=!$Ld&`$/:--3>z.ccf4|_Fj:PH/[fK/'3<M{={6/CO~8=S7k`fg>pK ǽ=/~h3rlr{ ]{uH7GY碞ޑ_>z$(voi iD xm;T5 B;9;=AQL˶n5gпЩ v(P(G2/ 0\YtJe6ru@!d}ڼ=43KijBi%T[.%m˚F]1w .>yᶙA+iXeM1*-(#*&70C!Rnw[xR=f6rҊTKnY9&s)tķ;|HFjƈE\Tܻr|Z>4q]2hβ` 4δnv*fmn*SDϲeG28@FnM'φK "Ȥ5HAPt6$Y&$pZxU%rKEf=prXM5ʜ[>nSi7#i>8= qluwY].S'G5 C}:p eNXNiN0$$4e^ V=-x`m]Ղ*8yQT8!(lFύBE-=Uvjf5qnq`g$JYl}\1EOصW#!KpJ*Szs;#Rmem` It>\蓳Rs%Ľ${}<^r׽ߪ8&mM7^WZZhυz!,}4kqr6S!ZD xKN`2nZAF#LjrOkC5 )o hw κfF\v*ږ)[ۀ^Ai?\E{-I^1W`}DLg 18`z& 9F0B@ !Je2{D.K u`5ꭷAJ:P(99jب&ΚaMhʛt`gszN'H'xMB1gY^im418 dRLU6[lbj|A@j\2YT8WU2G0`tXJ->*:k87dPKU,=p+T{ 5['g㝱^2u_cS6aV`!)gLX, Mcڦŝ??NjhZ L3% cvȭ* )PѤlgHu i',2xc2^bNjcx+(J+уP$&V &}ix >-h8iY:>2'uu;~0k|r*bd>aaa8(6es䇉[l8$`xǩ ^5W 9q!À}n  fb1ZJ.$OkfdHrt2.|["Mu79|h'E|ycSfwNlOfMyÃ⡖1YZWQ_5/V_ ;qT>{}Fw`^5izu_^nķ瓣遧a|.̉0oۅɑf8x>f;O-t+[U͈kV6,/hG,XF㬣mN^nUnnuZ]Vy23) +RGg<|(7m}ތY y7Vz} BcY}  Cj1K;6o*Hk\sE܃rVy\ɚ*$;_`oӥLC4T EFk4RzIeL8s%vu!g8K`vuu瞗 6nԂG`XֶDi,S4D~ӛ5I۪3|/iY\]E<+UZQIB|3Fj}jt AGiLV5`=䄢 L;g!,oo)& jPi7 p!)T08%,Q8kmyF&r {}R#c';^P2o@~D7Gg,ɾGpT}Nf]YWE]=m ~*#]DVʺ}wG*F|$:WR;VM3csVS/\6I: N3'Q7G=9D촊h1jc6zU8Vv륲^ƞl/`2 m/?]Hr}V{=[Qkت-z" clAeC1eWB)eV;JC:wDWXs[ ]\J+BkP:>ҕc/[]!`ʋ+kU)th-PQrJ81t 07t Nh?QtQ7AЕ@Wr^ ?tɒ d1,)k|LHtk6 x m<b5L)-zy3˯/hl:c|1Q?=s;_A06&y=Y2-;Y.n% #_FABy,V(YQn{ҫgI/WNjHn>$h͝tQVt7Yndɹt<ZZd `r^EChEC(-3h@XpBv'Jer+f+6䫖Ayh0^ԕWLL~ɏ?|z\7F|3=M4A&Ht|t9~d biD1,p9RXJ9t&ڎ,},-@Ϛ$te+Kϧ+BiHWgHW, Bm-n9tEh- $gLYҕ`ӳD*y7lB7R}^,/mB`vPaDL}W86j iP%IgJ*k+drPkȈ tȍ;{#яV]P/?ťt&!M+|_Q'?#`m.,_S|ԑWv2jv=K}^ݡF^c#G$*4M{[s?_mcP.&Xм^NEY$D%V h-m&o~;a/ 6Mܲ'w6~W|a<>EĴ&ݹvo5Ӷl7e{ܧpV)..g!~,U  ~PJU3TըL(ɲ0(Dz$BBWV _P7ҕ!wQ(0l3ڗuz>]J;.#]Y-WRК{EJ;#]9V+s"B ˡ+uЫ &ӯ]ulO]uN 1\ 2Ta0+`"*eUo) Xj[@p-Eo@#k7JG s_d`튡+kL)tEhN5HWgHWwEU+B Qtut% 1ftE(HWHWJiK+a5BP`F:Қ}hEpEr7t"CK\0g+( \K+DQD([F:$ ɋ+KN'82XmYJWRp7ҕ3MM9tEp,mE`tE(ǵ/C7ؠt WB[%Zs腶MX6FVJH#[N N.k<rAU:|} WxZ}n([;htFͱCµ3: 6+\HsF4֚Cc Z鈟Oӄϐh\AtE.`"b2Rّΐ$8&JҮpr] ]&~btE( Jq)S~QF:C\+Dܔ]rftE(ոpter$\9kW+B+Pѣ kPX`'"BW ~PQ:GrKYR;VCWW]!Z wd ŒtЕ=pZ|ZoU'R:xS/wCkNd vC3h;Еׂj: 4+=0~UEʺV%!V$ˆQbKiXWAov5 6xno"l.ܑc!U>nG?%nyU-6[u}kW&z9r;#+!Twu%u﫤)JF[Ru|M?7-hP#fޠ%^\U"6R{v<c~6{]4Oe{x;>/Mi-!fw*'|sg:~ҏi G{xB@tO!a M[-os~o^NeR֎crJ&W{P*[lrƋQ I#cBF9}oo=}%ڎ~^.ꭼ`J յJ_|JZen j\A!/pYd9; ipH.h\62x$ejr=Lf7gN{xͭ9j lo~l7 
B|v.%>o~uy?eNwx5iWQΛ|˩oitÞo__Q|mS$֥ȢTgW&lH*z 1Bv׊+a׹^'j8Ԃt3 :!K]5I$ c%b0ٺ'jVZvU= 79? sIzGS pF^]ZLh}Ҷ'$n#)oǨm?4Fer"xףMn=Y7Nnׅ'%^E VƯpk#fFTK&JU'"$ԢN\)T̨1- hkczRmkL.S9KoRԠPg$78O3cJo\؛d ȅ/ʅ{DԷ-iƗCڜE\ͮޯl\PllZጝu '˔`DvV*eƖe)'+6&T m-_{>r&Bm mj-pMFێYtI:TRf=3vopf찘ǖ9M:dmem{#k#8qkyȘ% * Q؅V _Y.L|6VB$ZѵH&&Vk5*QxUd5x3i>l۴\Nc"b #&2"̈02ȈxdPr}JkQۈB&p s[ƴc`QSʈ0m$II.k"3L2iF$[hIsi*0&78O3 : /ZgoRr,/y8'%Xj4  3aE;(zr@j<׵"-MtTɎ8>K:C3(lp[uz|#\ ,bz^ysQ-F*{jGp?O|;Ǚg7EjdWٌ+MzebAHZtb]G울| ]7+2`/\9#%䏉 {`+jh=[[C3u߷n_M)ڇ&8xl>'ol=lz:_Oχ"@YU1[q+W|#0.đlնGWA2`)V9WIcXXU# 5w|I'A`vdvpN0 'AIcQP)j,4:+ee RJS#-~.$!YP$zUk]Y͌0~ЀKGUM={}1YX D^ɋXk!P|&^q3٤d!O:6G6vc0.lmV8 (C[5-xO2O((y:pPKƮ'l KC޹Q&d`VEwn|:>f-FUb#2(ѯ BރTMeiM(r ɑ,CQ![.AZq&mv%GiffOOL# _gumj}g6s/2}=+宅GwԮcve# 6!AS@Y]Z#5X\4: FGQ\_$49mD6|{un7@Uh-OדDp.4 zȃד<Lr[v04v΂{ghM[VmԳf(T~l1˨pd+E-Y ZJ]MXjɍހSRR!n"KB/*M}qM7(h~i\\,t>%}ՑPgۯcpy>^XCgtVYNH'0oX/G&%^ zWiwեǤ$y :hr<\jCo.C ^561 eҩxmn H8jٻ^b sZfI9]E 0!낡(-1%2/UR[J/sJx[3q5VVїQ6̳bq5J\|'?ED[k٭BF$+jZz}1iP"(zxgA+ʛͦ.[Æ|uOb\[{7P썄 |ۧv&>Yuӭדg"qxST&iV`8}6AȽ;;#/d0ыcB T0䍲 O8_,(4= J2d *Q2:+RE%K ) HcU`Q8:Əae }‘O8!ƞre&i}j\unVq%tτJf^J49eNڑtN;H0qHȚik^1V6,D#!(t.\4.O&!B2^qBqQ=X+Fy6ETY/'e{/־yuVSqEY($)^plҳ'~Mod-:F&2lP;aiUfIHGۉ$Qnpt(\臋JWn_vW82Toe5n~n~)Loa_S'fz. SSe!14FB2pxhS}k%ZQEqL+ W=נR4(7 !nB 9B RG삕lK![ha Wm6OB{)dhSk "Z:))YhPRլ%4Y`mc2P$("_- 6g-"TQ_O`\V*S.$  bY٠2{˰k%'<Խ10rMWu?f-L6rJ`u&emeH> +CZ$rlq1o۶v0j,Üdr,$AfƵleȾ=8@*'*++4$-m YPtZ(v(\q9[gLJxRYglhg]Of>SߧZ p סF{TI{%)F"KBT Rۛ@&l`ޱGpptr^F[+` *0(ZiQ29F&F]=ji'q32GfW"H'zI"9 0BF0,;0je2.Ȑr/,<؛ւ]#[ .gh-VadB1gm&V yI HQY!Jk!ںˈlWgA:Sul|s^ #2:`*x@1l9,"eɀ&/D{<`٧.i#cb17W|I^~|}*Z Q P }ɎMVliWiηӗfkwPKF=4&@Mv<:u,UK^Bѷ&1ymq$=(alӥƢC`"5xlORƽ .N%|Ŕ#zpB MV{djЋd# LC_|<඙ ڢc}dVMf_RCpE)+h" ֋)A*}TsEPQA0Ad)E䜋E`GZc*]v&; W}byd?Sw͘5>ZrO1w2QŽz v}H͐0992jܜxu?ʍEJїo'](^/3nlA\J`7{ %Zgaq>mIFA*hDE,/ZCEQcfhAe.4'H}(_qyؒ.Ь%V~>O.#g$Ic)Mq 8J3(SyCoKu^+ОW`)RlBMe=57Mp͘U7[uGǮs 1_;b42+MkРC߄ulnYОѨũ=jsg4Z5#)c(L^)D'yfocQ rĐsw&D= >gR$oλ&ec\8ƒH$CWlX.V=,3rc;:yHBM`(QFm ah(kW* ].?`sy%]-=߂ ^)5wlYJ> :m-|[!Rd|߆SG*>?);N~]D+okWK :cMB U[Ev)92~`޳-niX3*Z;#F:veeF]ֻXF 'LnVrT)u>u+3r1긦F痱~8ˠ.pi_RV{԰N6h /NXѧ}w?}}z?}{?XϙEǓOw_~`hjho>rJw9 `\]Ny͸w͇m0; R\}{7Jc>hw{㣛hb$n}1R 3jgCҌĬT؉#bT a[FQH$b Ϟl/yG_|M<R0lrd2Xeʢd </$a@٥~O=QL'^ cvv^<ӭIR/Iَv6 o0^7QV֫>Q~ԀyB^XѼ p"Iib1G9ysP5/[tvlDM4ܮQ!hP,+cYCVZGMcpQZdNɥdffS^Pcp[($!)ϑt+L۰y*n |zW:4mp]c>7[fe.4UdzEmLq.nz7>T։Fp=>Su= ^x2/ܘh..2 r+87ġ!jw3;hm (]ʏ9Gg*]y>,+,jmݽmU+|PZ0Էo%.:?FrRjю*?0Qb| UE .N6ӹ24e7VS!Pl1pB`0$BicpF{Ĕ{F8 #.jCO o@ NY.C9:CdeFG|pH)=A Lf]N7G3 o~]q|-*T!?6rxfDv~@R2d³1Ke9˘5L4# GhRIx!t~{H==|nÁm*R GQ@*6!+TGQk"Mt=4`zDjOAPe0t7XNX{.m=U7?ǓI9Sp񑞽D>̾fpy=ga|.' |yTVv=켊%rγC wU@!9S ZWԨ J@-MUqˊ0"h91?`X6i<wX]~ 6ԭe)]ypr7CP#gcv}9;C~ IY)j!t$C%ih P|=R\x5!-TjmNYr?1kپ]BRkiyAkz=R9ݲ".)jQ]eX֏v 4E1\H0x?NQb8X RTҧwU?fCt-/u@{,șS2gb0 g]{ArTJ|B_Fwr76fsH:J(!D|:nT'OxQ LηU=cdqv̿} #/n 6>Qox_k3Tm|%b7Pf|_ *,ň2 XۄoL$ l%Qeq_(]F\Lfدumkim~$a&ph| 59OۅuJ0UX\[g@8W/?/g֏UeFz&l i۴w9ч%wL:Em'cq ȘP- 3uՑ!{V3Gn%](dzq}= R}{qyCt{ʠLyM_JM9tw;+_T4 zZ[nfח'a޿arE'DThB16Dg.jk<2gAP0MglXL-FtZ??Ά:iw(Ƴ'e#D -d>9c12#ZL9KxOzj`ȗEF5&ZD( څyP˜;Cw(򎦌CYlۺo >DU +p0KĤ }>B} X*7z8 ڻ. 
|J CN(Md!&ɚ`cl*tD}2@M*Sϸ(b T˟Ahda k*-6,cǕDB*ˆշWTEDu&R$5+"%"%})!Km0%3tp ]!ZeNWmOb3t'٧."=] ]i%bP+E$ ]!,-%=] ]ܣ]`ա eY "ZNWNvi1]+@k=%3=]}6tewz{FkInxw탙E?|1UvϠOX(mx^ȼ~]$kѿSjPu2/ehGMs6jIЫXX-EaN&˄Sݡ ֶ3+KʊW4R~Es+*m=WCQi)ܥ{ _Jl8wKLA_˶E=+w]6n-r sf(w 4a|p#`U.c[ #Aޝ{w`oiKDl4JP_HrKu*y oJ-zJJmiHUCv䷲㸞^' SQJki-`NB1ڷ_m4Z1NX7 N?@w3lbqvc4Y[lڃHڻa[p/LtDL5MgL5kiWL5@+)m(M4ӢS54EWW5wE?5N4'63tpmg=YdF)ҕ.Lu+,-okFD{b.ٻBJu!mekݱ7w˷x1ZF~([O=tup+mrm# ܵ: Iaf.9UJ.gJpA4tVe9]᝱(݀hm݀(v Lhւ*~{լ+th-o;]JNdOW'HW\JX f=] ] +lc]p;CW֒[WҞN`+̅ ]!\MBWֶ޺qutЦKtY \ECWlYrDW^pK{v]|}; 2_ #w+l|!o@uT?qгh'5n>ä(H]9&Յ1a s&MNTy%2@pOͮLkH !k{ 2;x(84iq=peG!fH&{ &:YԆ+ᕗAF*]=D- j'zM  mL2YP0R5ʅC}Gͦ >L5) A,9FAYV1S)R  l#Za %$ZK 4b6}  (1ՙYJ{65Z.Ud u;j.,eq$-"2d*KJ&-LTDH$uw1KAJ`n`Sf1 BpXуE'"dl9t {g42 +^QAd5(4B}ٻ8$+^%%F,7_fe jsDwֶ͋&Kb;$@T7œ'NdD=hÍ`,h2 ѡL6(OiG cYlnc0j!ShwUa}px͉# A> 4J@~/_\bjSEKG:SAd*X  $J!wӧNgUA2țk#8s#ԬF[ ރF%k >Úݜ|OZ#vR=cGZG7H}m fQUޅ6t&$7'XK!Q,3I buRƗ ZRv|^YbHuEQCBNFG6Tق1K(#m'Xg dGh]udV@N`Nڨ/ E*UFG>rN rSNcG3PQCm>+h-a[۾[qCa0P߸+"Hmbu2Bsk(|-b"n$l ]K ݑixu dFCVj;xX8CU>fj'XҼ1 [(> sAEF@n7((ʡ7f KV#5S":J(k! e 1P #qҰI&}EՊX{gEM-dPgF\HK^ܠE{ߋqJb6(!9Yh>3(RAUvNZ'$̿2COvߙxc މ5]xyW ޫa}Q7ue2jhX 3 b{PT8xi7@.AϛБW%sJٷUe ìcJ^b= eDA!vAy1k"! 5y]%@_!80f;@]Ii(ʠv?a5Kq[S-Ah[Ry !zbA jhjwXbU3秕ыA(Ph0%/@Gm  tf% Ix2$?< B#jw7ygjތXT&*b,zP>qVH blSN,UhE|MռB:kϢ;;kFFP[TJЎo5{%v iorC6ehm6ZY)1ZoS@n`{blo|ܦ:[]{\dLJ7 Pw]< pff=+[{ XQ- Ϊ4QG]Gj^kIQ3󬑲FhQAi7_ @zVF6k80)QC^"$u-ҐQUr#a?EtNJdt `%.(H H,@z|!(!=:T}f=. >,`E^1H"^S:Ԇ&@u#g]䍡"UA  ߡ0ˢ#ɡb$UcbY/xp`\Sclci0Ih j hN\76xk+fnQ[Hk֬UVm(|Ϥ@LQX BVcږV#ϻG&&KoZi.!@6(=i ֚&-m(-\h䋑Q+Bjs@S AQ%#p45PzB%aPJ↑$57oh JlnT *6˥11DL0r";@jҙR+>ClM](Ijl$dipM(]*?#t=AyG-F(GT1vob-f+fo٨fTP>L|ba>5^O6{-3ynådN>%*2Â@ 8,gp杆Ou.~(M':<'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qB nIN \Hq!xN 8 H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nu!_9һ878# :N'PH'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q(`oB;8 x7H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 tj| 8QBWkUB̂ QA.n:e" ]Oz\iO#(OJW%4tJ`te+#t;^[|&߬x9:fq$OVT|gCYbu>W/ޫjBjo-4/ߜgiW:V cz:wm{k)B4*4^hkX:{ɿvjF~L'W'W޼5ѻX]/J?~Lt3UmJTc4GEo"-%DY7:hDU|L =>jV)Yx3~dyWHCM2QN~JcG(Л۟q֓'N)j?W~n1֩ f1,KDӌn*4Ehښ[]2m.bv1Ѻ+F.r bq9nJK+ :]1J+E1ҕ6%pZbv1ڏ+Fҕ|Ƃ Y]1\ZQn-_ ]]R!GZQR&}tI ]#]E"T bn1ƃ+Fd*QqICW 7кϽt(I֮:te?qs<S~podmx"Vzte++t&mmc<֢{뾾]1\E;+Fka]!]̙7*b:]1JCBWGHWunAtŀr z)thcv|%Z ]1ڐecS ba9tpSX ]^>xu((BW_b'?e?a1m:xui놀U ̒620CW w9 h玆A/CWCO[;괔YkWT9emm:ē;wscrr"Oo梌ܱn?HIFv9gEqHo=fxuب׫_7wcݰ1b "0Bh/]kFCf@5KΕ#R#PGӻflye'ChC~(=0:Cںmb _@7Mc{Ʌߜ_&9e_>ߗ/_b-O7\ ǧnqR" c,v{swW؛{&w96_\u?=R{*SC˵AؿB}{ lɨ˳iuEN;vG*$P[Qw-mI vs~w A86~\SCCVW=3$؊{{ArIup̆`%k"Ǘ^4)y>MrS7˯@Q #C8LHM@x؄ A2nqVi(,<a8z<&5g&'1E._ mm~qb0Du ,vU ZOZ7떛>WEXdy^j:Y^SmL~4Ѻ_drXR`k#\rk&V`["w.R^~GpWf>L ;>Vf κ}2G5d1輁LH H('Fc%%s z$A2K1tuxɲxcϡ|:R`R1 Tfi%b!@$*C EQ^-$[z j݂.E*{(S&$`F(.jwbgau;qS-MR41ͬe, B4fӄx:`eE>t~a~9.̊ohGfDt1X16489 B{oघ 3LR(a5-$פΫq\~1"bc4nu乲ƽO|rY~|1Z,f%d2uf23pV9t%\; Βd9~m62/>d3]Z  'M^]'9 ϦcPtY& `@:)"RLeAx97e $0! 
`:Y.|AnPfMy渨AXԮrq&1|HUl_~kګpq9嘧sp1)ZCJEV m~[:1#/60*0ymճ+3_+Ou7UbǗŞ);{%hAv5VƔ ^󰒑1t^m.ːMcpFY5[6g v4*'Y76.̍g#`ǿ:xoޤ߾׷?s L뿽x+Xq3A ?DyC׺UT#Q7~D 9~o~+cvKn%@g/?*Mlº[ff26&X.|W # ]DEoPA6]J(r6 ʵ>OT߶Pi2U@zȾm_쵏ĝl$6rR) +i=$&a:4U(`[$>"@Qsr6Sr@Bv] ĖehuXWeqnڻZ2gZ=RNaPUF& ?P3fc',*̤tɦ2 p!t>4X | FbaMmoCR+&βAOfw&r?|%G?+bR[}&Y#}~Tmx Fք^z8i|Kٮ0?Hg [1,ha;w}p 1a1<Z$\'a|3Q6GC920HQJdž7Fx"t20^eQb ;ed[*h jwlJa^i ?g)Oc.*5O*bP&[S e#n~ܟGˉt YڢPd "PAm \iAxBo}1Pt D.~%7cfز V5u7r}AFr8AC-m2ٞj'Ɔ_韯ǥkռwkX[ڱV{=!gBZX-]}#d=uvwB]XLLj9ԫM"8tV'ֺqJnս&m ,wǧ>RT<*)𮙶6;b?AgzvqPok6L`^veKbOz~0rX )`!\:B&EV`"@㣤pg}B%T8wcB"p$J2bԊwDðq}sXר boPg I:Z2 ;Uý)SgVUIGxuZ6KeglgJuzX}gߝ$LTlt's+bn蹠L1Qz&j{{eG2QEK&Ž|HoQ5^ (fVP8&; {N^ѭhreqc*h0-m=3{ m#( {Fs6z%3j}d ȎkS>Q-wo<9 di@vԱ7ms«τW$۟«(kNb{ AYdy¤9c\{Sf0M;T=AT}P5u iAh. sb'V2@g [t0HN3GK iKh|KC5Ä́nYvƙ_pL` 'LSq'tLඨ._r)BKZ_KJ&O/ /_Z(z/(?"ί>` GG,Zd ;l|8W>V3Dz_R~AJ2a &*`]:1pa'HJ#)V(GC$$G^a AHR@[XIDiDk4Pǭ92͟\,Ig܎ 3tv}8ɦZu5mzI_.2OlQi[7);ȽiǬ:0do:^ L(Cjv^ioUwu3cxqG;owꁆhE5>9Q9nz͚ᯪ{uGuҝ8b_;wV F[}?{&2#ǵ&VycYʔb>8œ(d '!R#Z^(V,UnIPK9Ya,Ƣb TRF&%-h|'(!r'pR ϠyWXhk%V #2K1q!Q A$IƱ~a݆2{,8$0t;-=L*x"{K}($MM@AwqKHR2M/VlqPn,<㫶Jea7&f$1AGen s|٤eJ\jμӯce'"e|5qb 1/3FS-ZY8"8Faac8MEN>^\wmI6dl~ ~T[JdɣGlg}/IE")5/}ﭪ>nթ=/Fу_A~Pfg4:ľ0+I.]%@HtzM^T co)ulrmJnۋ(_u$ܗc(·m >Vp7hzozuFI"vY0{jP<pN ^D5FN>^bbTկֲ}lyFPuF9%"sre B2Q#c2|$xҎa4_ &Q|I8Ty6k#xė3 kTF(+6[j:$}*7ѯ+9I|y|\/[tO<T|z%e  FY eIP|ޣނ|m뇄/|@$vd:O5{؝!I,ً }rl쓧+=B;*a@:Ej|ENoxI Oa?`heu^4X4&8Ib IJGQETf$K"RЕWC1YY@ʍj4Y>X$\[QAG 9L&(oF9F(҈>ΤS–g҅<5p:GLO?صUgdhw? v:GiػGE+f/F%kd:pʐ X4Ydy2ZjT0* >ꖄ}o{xk)g YEULaO & k #Q2YMHF|<ؒ䏵m"fl=D%4LA%";S m49:'0NDZjGwcY4]ܝd_:f\fqyPBMR mt*`2xjtJ$آ4ϒ4:t[F\bhou`Tz(aBrQ@K([iu@(ZGfYȵVi YơwcXJʍuEKZئz2oӓ˫-ʰc:".CfU!*Th\4I K:v Z1*j% BvgW E' tw$ړ#v嬎,/P}ͬcè8ET?Gׂ`%8Ez)DI@i )3ql][cj*b )bEPq$U0Ys1Y$&Bh-gu<I\A}cӈ(GD#b1a#vHe1f6zFpfBX/794"d@K_:;PƙcB 4aJ̤8u3^嬎oux9ܧ\g3+4.qQϩb%bAhyP!ȡUI)d)\ѱC$C6Vֱi<ԍB[a+o"2xXjZF1jQ׋wSѨU87Ac'ʐqʀv8d:.{v)˜w^wTȣAN~+>>\Đ>c?ywoK5]6m(ւʁ)@EK nA"u1%Y$0"Rآ/q[,l8nK QIv=ƨ  W\$Y{zQ{PC(3L6Gv|$9pss ~LL!@)̎(2 }s AAn$[ց})oz@fZl8f7b;7$*dw{޵LJ ,٢rƠ`I@}RΕ@Qh/|~O[)Q]_-"iM>F/W)/"Zs4[CmJ&:k^y=Rp[90l/)>JviWd\b3yT҇$IDaM0Ƃٯsdxp7U))6U#T{)iSf.:m7c%h^H6d|N+$6%U DE𺱟5[Κ~6fj:`C0ϻ2P`b) N>AT;*s$i@J] R׈;$1[fGJ Cru``lQGh{ȈF˨CvP+]vy3Jf?E CORAqbP^dtah!R:f!m"J+smYi=p4 {5wMlXulQ錙E5!X:53 *dQ"ˤM`(VƜˈڮ/hFB3, X=ЊXKR %cU'̩JDG#2ye@Dݣ{܀djf!zl.kL^_֍5a|0o+D-@$b6%wg|OҗtLa|oH0\]|8m1ggdq(Z;CBTWLRGWKOmئzIYt1 e N껷6jf[n}P1MΧ<HSM>Ww3"q?{w_v~;;xcյ}rzqW_q 15Q+,lvJ;2Ql Rݺ)o}uP1{@aq zZ٤r/&w(Շv2A V!QQHFoc0Izi#hRdt_rGEd〕KVƁʚB l4@9iUpɲfY,xQe^ 5G>r[s:g#ȸ yAK$kzgW}PaOz \?ۊ-bCMɨ`@Rg Sİqz{!0gg3g&,eEy b:ovo,/0OURFN3K0`ljxG|/B4dSX/soq/R: 4TQƽ#{Z90^>~>K9?0QX}AsGm:q1ET+]cA"8" ਊY]^0Q e>y򷀷7CmƻXZ[| 隝csߍ OӘA!f!;Z?|HfP%0[( cK`ouJ܈oQߋQ"QLhd=jǬGbBCqRM|0ʶ[rZD'i';̦Ȭ. 
VJyW,Лd-6|) b]%))B4+]R6$@RVɨ1mKS"2l䥴HC@&d`RX(H2NUh]l9=^,+b9vCSLUhQj=zy7[KL!fNt~f fE&w4UksG?+.|)N{~I `θ-3^ߟiEΫJ 7FWʬ +H7ƍrn(w)e;fƫ_b0F>vz-ﴺ"\o[5vStfy4}iwcydogG~e?0Q󣓩 8r|L4T_s 3~Q7N'ϯ/nZr\ǿ?%3ѕ\WJs]i+u4=+u?Js]i+u4ו\W<+u4ו\4ו\WJs]i+ڸ5M@XP Օ4pIo}m}~dpe[vkyQ/u\8^uNJoW8ٻ6vWo 4z/ /Nd9q~;\],_V-' q!>C<6f"cW9֕O FcuсϰBbE/37xQG0& [səv$_ƒm#y.Ww@*Ņ|,+&2p!kYkC p_ ˸eK;^43ǫ+OL׌%|ǍM^RS1}lZCr;L'#28U\44[Ɵ\t-I_&29 ch*FیeΉ$B Gu;<`D( T [WyFH'yL$B A\H<:WMv+˽l|>Ӽ$?B&!ޒ*$VHUnsQ+5 ^`133O732%mԑ.:fG6Kcf ٮۮXV+ـwOjjJq[1x|ɫ -LyCZq9V<"'S7ZN02,SPIYx0"y&oB s:eK0%4N1A*K`pE $ZGz-JI(G6FxE:AWWhXAQ~tV<-.]5m:O)4ssqͲJ3mALf1 KV6'9ڊN{ b9aLԮ90X9  i!$-UYv({zl&M4Ns%beT2Xq ,xKާ2ÝbB م~oơϵ5  $Y>'DQ ^T1&dO ALB'i ys 1se )HEopJ"8x0$DKfn P7EʔS\hoi6ꜥYNhyJU%к9OX687f  ?&?vJ0\>(O?${ʙO]|.[2ʁ }K&NɹRrN|]Kݜ3R'iųOK+ͪrKNL`o+3+q 05'2xV\\'>84B7S7J)Ѽ?/[h7oNgWU\+d: ;&?>R V]^>qAG¨7Nh~m~?yzv:0{D'5_ s y4v?R(qfm"p"MoM/Ohu4lZZءibwhX]niqƘՖ, )?~v:CݻYfAon`͙E[nvޕmI#pŕXջێ\uըjԧ7ͻ uc搸d /Dce#R%;g<9s٣,{az.r_͠.3]^I9^zRI$/qTE U2+ B0P=T=RZܜ21)# hɑhY$Ȗm F S=QN'D9;;I\Uݎ0=7μʿźǙπ3iZ߂˻ࣇ/155|$a/H+/H)yϗjC}yKp,ʙ\ပ>VI)TBPe>`rF2im< -O-#C6&0mC:MJ>%̒.% ,ݞ̑ޑt A;;gGE2Ϧ7kxXp7U\lm6I|k-tDwPjKo_g*nuCm,6bRb޽լrų(Z]nmsեO>!1o$mn_JWқ[*]?ͣTH%+*Y-fn5^X١浒t2P>2 m}_R"-فzjj.,K{ k,Tw)smIsb c6A0ݺd fS:# ;X26,mFO@oW|n3| ShhKX :[mLm[xἍ) 4C^@1Dbz!8 #Mk+|*"C.}?nJ1F5s~W^皐|؍Rk2^t~.Q9\4:1MJ`,VB'4L%#ӈ** MQJ zLqxrEڒP&(Al<(Q=Sl%u,fޅ-kvWp(`hPOE][vC.gw!;K](VA~g߻0[mUB0x`2\# #]I!&)dVZ)l{5t5 QN1S2&je%䄵T  ;3"7\q}=-N>_'4h'L`7R>,p;ђ93h)`0hMo[<٠!+|3g~rWђ P)u>-p |M$Mn=,Lg;yKx02N~ozvG`߷7yw;B2AzlgdOt<}y19n9^6c:hK~9cH8X]2-IZ&tq`ܺ{9pOj:.Xi˲ h"^vZUFe3 [ϧ"N#cTͲ'ϝn-"D9Ⱦ [nyx~y %/6"z53.D3>gQ-g170ã(+L?X>2tt Ex;? -H*I@kقQȘ}CZ҅ZƬ*SJ +N^!WG'gur.oH!.jV@v!b"i!I \h kˮ+q/t>m=4| g͓c9?2ȌN hT:8Thd;˱0ɨQ1((B+B.D@N-CAv]1fDfMTO %١>E3xZzLJkؙ85cwX3]x.x!:.ȸxpGZS^:?: E p˔0*Uʾ]%EBrg,,n:N.@P=B66AEɶce.#2 ;=eWl85XvgcON+܅7? i?{Ƒ@8`78ԯ8؍!",_CQGȑEc5jI5X-6!DFC qPٹ@͸jHe!hE{bMK#b.$ģQ-ak܏Q_Ǡf`DlM?ED2";D3 *$%Ͻ+,Q$n (F7m BhG$A 2θQTw-;zkx:$:".h+lsFμktxvnv 6u1`g([o~aǹSN(J7\k^c,ܙ_q _=J*JJ.h]#Rm74E2tA6[?R5lO,pnvO*O`A/JT~>,ֺu|4`d p )~{w 'Kf?=P0~!30 wތJC0S;Y\*wo;Lg)`iN NM  \eq>9\e)e&*ow5*p\8%˹'ϯvD{}|дƎR0UzD)U4 7,cs/K7|Y+gaz/?ET^9bf7+™KhCtBiA="Ds,4Q(SEs)Ѥf&]V4;aMi7vk\N۳wSg$a?{i'6C֦=MOхX]Q:櫳88Ƹ(l"[HYEqNE z7Q|\O;{m24ع|Y`SorR$Yj-)?.f:ʷ~ۏ@lP]e|:v)4$W=ചo{كSgm)9_ $Z. ;we$-g-79lQr= L23Ҋ\pmU> 7 Sw %Ѵ2yYw}fVgD{Ò pQA$[Dj!*P@RJx"AiW҃]ʙ}۟P7pr3Ż'vKbg,:AfYi"hA2BUysy)DS@N&Ui~%98xx8@J8j=!٘w<剪q97j.n~Yq֠8L,5i,m"g{^f%`?~_Ms.8O FW4jWk4^SEċ"1["΅9{}8-7W})Pۏ炐7- XYS3Uc3maO؃,>/_;Mx9͡UFn~ɦVU-ټq.1T/}106>-:q롨wG%.i0!mNLI I"HgQx1"$ rD&k/8.DxP &eFRIk:fD5qV 9K?_nElԊ^ND`Uvke&P xAc1F`$S5qg3\ަՋ5#~|\xI  ChмtʼnlDuD";.@di"U@ a0\Ӊ'è@J.(3'-UBɅzDy!: -]w]/fMe}vvrhӲݤ^B6w]zM__̅Cj<i=l[o^hE-Nal:Cj9Fnzz^6yvC+-7xgȅGy-myM+O!bhtkfqxc՟纷k[vPLU˭͟6J&+ZaZTN-X{E?|tXK6#l@;2\@͢^KeT)U =P P1)[aR)$a]Ft0fR:% [Kyͬ~II$LNjIEPcV`y˕Feq1 ʤJGa%="Xq =0<0$$)T3p\= FgM FJ D+ Aw#]rR^VJwkKT+|j <+lR8hNe )f䜖,'0[4:3ާzBO<HBíI)2C(K`0t+'-ȑ12ha#O 4ʸw#zA#'@`BCV 1` HA(Ͳ"#ڑm`NH0!tմ~+eN-q(dhP"E-S.gw!+ ^kt :}f:#* ;ңdeYLsIiga| D:u&E? 
uJ gl9o׎Rk@R,"!]\˭u0rR|Agq $/ uhEF2ϚF 5ïoI&WE ?qASd?"ʣP;Pšfy0Vj¬/;_N" FTTlL5293iz Ml?m6~֗=l:{[Zv~3:5@F{w 7GJq6(7N ăK/?ϖ.l0LU|r{ Jz@풜(k ~NB!i[Ns^'vCΝǣAeۏ/xɖq gO?/7yQ{dV6KLgdyΩ┹Zuכ*+wd!^?C>Z-׫ƫ+̓}.GD-Fߏs;8cι)|# A ) =PhO/Տ.885nӂeF9kF[A{)llrH){4wp]zŵkk͹yx;-onMZFxBтAg3MW&v(iwnƍdqS̫}_u"/ͼ__[5-gO߼~%Ýw||?Axà*j1aC6L0g8>[\>˳ю￝i|+O '^v:JquVz>lzs??t~\vvU\R}9;a?vz㫯78YKZ*p~zvuʜ:Y~<ɵaP}k׊C ŏ ho_wT'ǪͿGG^SujC!SsSd^0X#ӭGmlnたt>.&VYEخEB2qk^N;/:6闝@q.ܵC]{]{_Y-BX]?HAd +|LKoCށ  Ak ἂD,_ewuzdns#KQ =HeP`$]vJ$dBFǣ@*]Ң;ǘ \8"欵=U2c~7lc~VwklYV/tڴyc'0cCe:>}J*Hѭ_-/mj^VPo~84lt;3]WUYg?L Ǔ l~5&5uYM]to&JӃH;h )˔/8++E2S|)4jKEϚ)8"_}lr&<;i3F<{WMf:P?P ]5Jb}v׾ky>5s &Plo-d=\UC4m'ݙgNG?9M.'i 't7+f;pP2O' Vh\/'˭dс}^J}b75_9>lC0d8PYˆf<78dv~q展x%TMOg˲dVd7کȕR!!d ҒՈDֵ#>Q,2+})yf1qI(24:9eHNLWݲml3n=0]$45>Ňun8ņԢsLh/- 03(}L;!3i)cbB׾l^:U6-6,O{oˁH䱐r,9FD!!1FjÈLf7<*oHB:G85Ğۆ5 xJ3": d# 4$Ռ,ļ ^,Xã&jB6R국ފv9aFhb }r"= {\L3$$o~RH~(s<& -{nE㉼cѰ_G%3&v`+r uсO)uT5SIY8 }ݛ?wC }_ Ys7ҥ3пJte_C#<~=oYR+a)źmf3u~60bVWhU;x1Oy qܯax-jEuԎ+ZV:JEPA7`dcnMpnʹ ~3mBD{3;VPA86l|? 3 *fP9 :\`D$4Dqޛ(`+9d` aaR4U+ύrXcaF*LSb S=!"=A.-RR%n09 Q kɣk{m8qB| 7P7۲<OOx?3gZd?Kӊ:J'=Auט1wkf=GTgV7r{LUr;i+jU-m2kZ]9r DŽ20GT#byWϽ "`WFD3+Cna Űl-y̫l+t\_ĭ2鬍3(rtlꍠFx$S&3 O X`T9F@;,DDꥦ0* X0N׬?g[eFָǧK#HoCyo/t>7IwfE;|KOخog^!PH\iWDi.0XLBLus)c`KRZGӘG"h@QAsg 86*GT KIwTѡ]3k{oԳq< &q]c׻}g|6,`v,dY -895_[#Gd6seθR,$o+$OcI2r*<)`C)J=Ʒ|QOfj.3ٍ_WxD<ڕ{j14cR96U@yM>jb%g͍kje o5d9$  ӛR{4r%5acHݐhr,hc|\ɚ⫝̸*} ֭;UVeL5\ABW "NW V-]CҔ+FDWXcJpIcN%*IoĞ]/U˳!5tu\zfc0La(,Еh]/[I۳)<.V%cHN  5[+qd~Ez>|WD}4-W2:wb-G {*kL~*IOK|wUyۼZn$-e78Ŝ9s˭#VLbePWLjRo%pAa?zMQ2ۂ1)^цۡxPSÏZ(ڜ-rm|өB#ɣzhi{vky'!B4ȢX4 .MhSf:ޢ ֢y %T,`V\V)g((OJqɾ[>(=f5Yja$',Cid_w`2K597<00 \3)F-~u 4H7\"B NӀR ;iʔM+UQS**ԢHWJSNmvRM^gC3'%tqyC/y k8Ddld`kCRxtD1, > 0!PqPQ9ִ1 p9jfBKȥssBI[=r3UTc 2BW -x7JB1m]%9/e7ZroBXKWH5ic*Mh;JڅHWJtqA\֘\G=ҕw01\MBW_J(oꛡ+g&'>OkPp~^UHz@JAW'L1>y\r0La(W_)?e;1ZuN4âdk"9msN?RsDQji+1k M'\:M'4i6DW =KM^<]%t*E`²1tRJh9tJ(hS5B1\ʛBW NW -]C7'>` *Jh/L(WNpt~JhFEW W6f R/J[zt%TH7;@W7fgjrUByiZzIҤX5G 5SKtPt ѕڳ5BT8 >m4e~WgP TЕj]Az6gp3 @6*+X^-9_e.xzYĊAy,˱t?xԯDF%IŰCED.t`4d%E*Jh:g5>,&߮~]G_U1 ޤ}72n4;¯ ƌqV]q a>l>Iqe5´8OG|嶃z+YF\tt,ֳb|SV>ѳ.3t}(Ô7ޗO=_#\q JIL6!_`>00ڹY,ѯeKxV/ C$e^.Kx1ur~/kBi1Q]vŴ0>--Jx69M?~MW> |-u&a\Oe^aUVfENnsRAetb!V27DG$5g 楦vwS󔎺,9h܇ pѡcB)OaGs+:2,"Z)"V"LY$H2k?gdXzhѝ Eh<ho\ζv,tZ0.Td~~S٫2vh&E>gɞW|'5uepm&\7@߬*P` {92ZPƽ06]|5ReqkrD\c(jjzvAPtہ]prK؋|tNuzYf` ?3d6Y(O`]VVz~T&O׋BA*RK&2Ϣ}u'yђߒ6N>b4NMܠzگ=r;2I>jV, &7aYS0KKqݙ1gvaoGeu1_yG581ΕY^H^S/2K :*`2f1Bi$%5J^h|^z)C0!B `eZ)Z$"p4zl5q+%f68RqXpۦ﹢^aFO ^V/NV6~ +?'>O`}oGI YWY2OhMZw& ]>4Iz9_ b9!uY:~9{hb#E~Ͼܚe2y%gR>yxw9|YF nc>Qv~mxVȸ 4dxܷɰ%ϓ҆ޜ-Ӽ}e>>W/;$}rkms`G[GVߠ/6s81['n*vMdp ԣ,\hG 3)npfr̙C:<Ncʠe-ژLd%Z@rFϵ,wJÃ&c:m'hwѳ &uD9hkcM0&q4~=<`^u]?b>E*|o+Rr5l3iQ^5:ٯ}~_g7̜ 5Dz6o7/2#ǵ&VycQ2ʝ(Yo<7N8) B0@G"`"M8OQ4`pHy68{n^q9LڏOtAu1ݲP%<'t// z7f={l}6oLzu h|]Kv2&ȁنCdVX]E )g iwFIK!(b'a$˕cF1`k id&G NI%N.Bq-A@vDzh1{&CDdzY{))08)wK9 Q F[PXo[ݾԥc]wmnsn%w@[ͪ*U%zg0t}K=[eoy(Rx, &  DRT"p@L#U0 Z橅yn\?IKAQ2K4P`[, Q*@ ŬV mԯoZlu(I nˮgw{&}OJ< xQfqJZ?0fμoOsl㬃GWݗ;iY1ʹ?Y8,gN6-~ ᓐ{i$9Mt."ToS"&9b}ÿtvd ч[e'z9P6o}>HUI~Ľy)tޤ3c+}=Of~ +q;lg&5MKg>2=św1TzW/#F - dҺh6Q zh0Ao W/fxCϷ]ep0:9tS{=9ҿN6Yp2>o?WnЩ?Uzj^1&u;L7^Lo? ߾]]8xr9?]N 巋^_ct fhm/b Ý"FזTΣ^: `!_iԃТ!7,s&s6f/jl\jJ8b{tuo] iJ_10c[&1eg]~^o9v=m`8:h{p$5m!|2h7c\ILLzּܾwff cdNP3rNV/!/6 ϊSHp ˊ uǵ6R,. 
#^8œr&;jmγƵsy|6ۍx!ؾfyFiX3Uq:9C)eO0Y0N"ގ\;ZwPIоS,1: <9OFuUgʿ$ 9",J d0cK)gTfd#b_Od_OdG(0L[06)r/pVZ!CE83LH9q>pױLGC4)n[EĖPƽuH 6Bs4fȳ%ˏ |Z3:{{OΝ:}TSCI:I bk$Ϗˑg2G!hW2T1ek"L,WBٮWwnDK~<8JU6>h΂N(UqQ tL #Dgr%˕,Qٻt6AO Gq0s* 1XFb@?in'*oDtcy]CR$ j@KPa!`Ql<)r595mrTمCl|+;S2)_ɠ_2eV\*}JFJ@X$c +5uh}d>ZRB{RFL@6oW12#XP1g&ʭ^3*ta6b{4`r/f[˱nP^owv`JK"RUe Ѥ(ĠBY@r;0XXXf>2FÐ= JڤB:lNl;t$L0|뻗`Ezm}?cbj9 bZ t .p#âR؀ņGHa3`nyS90?Wʬ0r!*Y3 t r#(`#FudևY+ bHqW4b6T#5^#d%O|J9Q<0=WJK HGnIך{שHAB2g($^2:ł4(&LR0̥ujr^#~9 0 ޥl\^$"Ž^\R $ QiA-Esc Lzm% ׋Ћ;sqǦfևroA9RQȵ1* n@ޏ{?e(+-^hwuDD+/ ҺdRR#Kh -v6AEj9 LN?gd7r ^HN$A^ -693>dA %p(jGI*N ԖinE J)ıH1Wap6r:׉S:n&GwH#1ǚ(9sSPW_r^Y[CD烰thAe,jVۈ S F[  rE؄ 0e ȄQK 3A@Gm4f#MK9S\P9,5:b*-mif`q/m;؄p(\([ [U RYU4c,ȔHEl`+o"XZ-*71eW]1FEX6R r'O1,FZJ@(&hиAKOIZAeldl\:0#'.C[i[pf~;%Wہ3D R.>˛yRgmqZcBɕeNF]$fМr,Qd eJ+@j$TFy7ц[xFQEPBRXl$` iZ D(%1w 1F: lp6HF'g$<`F%@U6w e.r:SHG怍+jF={4Kd0ST*`</1RXxt8=*| T( #fN0ST6f Ƙcd4IKZԖR2|%gS|҄IwIb)vҶn+MQƸr69-.հq{K+QD[@Xby]H 4mk\7=wjL)Y )|_|9 b*P9a+=,zn?_K"";E\֙joro7e-z`eZ!A*EJ6:7~BU`0,,?X|_qTЦX4A])Dy[ qNKm+w4 Xg^toQtY%/FiF`봬20a^PքR`bf tP[q?w X-_MbfaSm@r7 # χ2w-]C{db~Yl:K}t^\^[=!poqIS}RZ}@ q1L&$qTћ ըcj%P8OE #?|Scq2,(pX\W fP| fB?\>jޭqqht~@EW;Dž* :]oJ2R^b:wH2~:!$~Z%IZZ2fq͚y#^~9Wj}.)ǤbzH1b`0)9t#`ԁM-V'C-ީu!kǕ^qهZO04d8oҷ LM\,ӸԎjz-n>XmMKsi($ Vx+ʤ7R&KǫW0mvD<& K%Xdh%Ly6LY' ;1m.*di>~S3b* ]$ \`\p5C7],V|Lmjͷܧ Xsc Jܖ6:Uo>Rנ|q+v=UCTҮmbBUwT ;rj6cg qbc !/3"q%n$RFx;c4ݚo5Wϣ|2ClK*Y0LYCJS}2+BZ.87"VfN;\ rQk,lHɛK/u3"')1I S"!&"Ŝ(aHl\8J1Ϗ!`y!(D#AN*ڞ\F{ϙH>/q+\:XVDQ:.381 pf&`̝Ya*^xoH͚TnI&k2}]C\۽9J"[(n)D\]&p8ctt\!Vir uHP ˽4봓Kшm)F7ls:jZFP <1 eĔ nj6>Heq|K*BXYL^jʈhA #`Ҁ2tfyFָrpr7}4}n <Ηx: {YD9'b` :p_f"PГ_`Gw"Xθ oOo@/Fn~˻Qvmq܍ʙГq\T DbYr&otp\l .TYABZ@/`*Ih-mw=ay&р{W/Fɹ!vW:.^חWi&͝7Չ{vڻE} ).Fq|ŋׯC^O;HutX7є*Co>ɶFXUizߦk_)W$qJY,Ii["v6V&][o[9+z,xЛ^`ez,xe%xtXٲàt|(*~U,~B=~U}@QW./Gju`ݠaNxF4w_KIgqظriAC *ku*H(+Wh9:ɀ6r|%y|tP{h[".G7Tw4PQނKx֝tVd^83l) keD@+y2I+0%1;*d+!(04#w@^1+d}:#eO(*Tґ`ЇrMa\Um-ɷ֍e\x[OP.*RQ&c-yuS|T9]f"(ۤ#(sPQlXu3A ߚ}  nzvZ} FR Reƍe {)icb](Y6~yW;SZY5)*P mcTT*svZZ`SzLCKR JhMLp*XIT!#N5]"AVCooEz;OK ! 0p`m?nU8,YF] j |'=c t< ` "ƉAHɳ4%!Xm"VB@IQRc3Y__~zFu_g;5}:~uv~ٗARhwo#}!Y*V,M&PT[2A%jSRӶ""<իEώE*|T=[Aţ*zXUˆ"S2L-1"e ,I6Dz k0ŋ˛cю$!Űڱ; ^;&]ߤ[6wz QPV -䅒MN G=3e2)-MT-*X~C )g6h)N" sL){Vş'}uy[=j)/m3xf sg>[/Uf]&Ss6Kh; (D?FA ׽c-xsC3I6sDEY2PR|5FQ,MH QcIcX@,E${"r[9.C6Cw_Zcbf,O٫x/|Dwly!胊 i^qb7Ls{l24+t`QtTo3^ D\GwI+v(j}Ҩ$PdؐkztJ=Ƣuh 3rtR*zR8A*^~$[جk{>ͯ#5d@+L-%V ;w(\ ݿPКޞBz *}0Ma~dQ 떨 (fqPY9HU_;-w}UK{Eu;k.k!`C~;kэ;$`:5<{ 0TpNs F#4N#;:(ӑ-Ѝh4褏*h!v|ـE%((b@@qsʙgɐTJ$ 4yA6٦ :ʒP$P3G9 nQ.>nZMQ74*IDԔK `>:B4,QՌu}~@x+7d|mNLW"EToqeyp1 5B&wR:ֈXզjp(&X}c[yꓦwu9z$F㣈m TJ 6$yk*9`}JQX06LcښQ",%d!Jb1Ks3>q rth4"=ie`K,Ht dɢ`mN-5NC @5]7ejBT*Kx7K 7$k0Чf[C { (iytOy ;V '`v%Gt=EC eTi!Y }HjN.Gg:,d6gx~'-XJD! 
)dvȸ(\M7n?ș~P%(OuR_k׋r^駟2Fom\k_o޽;_nAIܫW'Z[ծ׼O>¨n /;l=Z4'>Dœ?\ϖ?x j+ד92<[۵ӿ]k0Y֋;?[F[GsHۆQ[qmfSX~fMyz'^uNnV =s|>x9Q=%]UQg42tlX||qk׳<(j6N],+5W ԲLϢtxǗg,`s?|??;x9{fۦ 7O4-kyG C 㭆Z!z{ bW2vޙ+ֿvixk2͊K?·&{ QZ}j~8FqE??1+SgSoB@vd;>y |;hQ>'m((%9+M JI8(b;Lx7£WK{`U2|,C )/8Q^Vy}Nda֚:h; t5<]{SOwiWO5{,)Q%a h_$dO:EۄF*5 {TP^YN&_YicSͶ!BZ i5ՐVCZ i5ՐVCZ iiYmo u5nm޸y6o-B=hm6oq7nm޸y6>@ոy6oۼq7nm޸y6oۼq E&'"qA2x;B.⧼ ~]zu@:D0 M UpzPx5Mg Z#L>*k5*&(iRW>"AUT=RZӜJ LBD%1R0:K JeDQ[xb ݱoVYl;œufsip3L[py|4BzOureݡ+^F TQ %ء|:,<ۙ ^cD"L /Ѫb= t@rX璓%( fPxr9[S9')@<,Up$ ^6=A-O ے4w4.^\<CՋ[BGt lof|>YAr}.O(.=ɝSt] vw]{g'< Kxy}uǭErNgtn-1^;Cnqn)wkyÝ/iWynywG/e<[G8|$#;pޑ믚#5mm??MχH^ZZozn7!7FGgtTޡPT N殲&~OIuGS ѕ \3@ H>d#AT:BJAJǐrTS(L6:6G/MBZ<-Dvv/جF폣@W|搞 sI0g,CLu#S.""TY^q3 0aHTg PbDA^* ;~1$X Ũl6G-0Do3Cy+mH>m)[mx8}F5\txfv]lJX/5z n?i}BK7{OƑ_KC16Q8~_GX"<,+(Qb b[驻=o*_q6!Gw,\ ЁX泦uD>R?0Iչt/ A-L ttvt4ٵLƽZkݴTo "ZyuoZ*aĕ׿5_Ib\}b*fYPIjN"L o _W/kU56UMvÏOq0i 8a_v>-|ts%R}\߿xժxVzzZ}HN^[aD[_GQo˫?~lSw4SOfb4~{~֛+^{ӝ;o]kWɟUM' {aU ~<=^]ç&RfӋplZx~g'[b=>םN.kw6;4Ch/Zŝ/"NJE:ZUz eIk [~L4B-]G)!B͢z@71#'߬foWLϣw)me<&f2mb=C<0L}ݤ3AЙ!w$,u.+o#x7SPVlZ9?k9q NheJeS]W/qfq()y6u;K4x{pL)` "PA YnC \lڳܲ\|<:E-@_^Rc_ HHA@Z6xAZ$+CqhqI-BJR2czE^[znlzaJ }W [ϫƹKRιOe^?M浤R1ϼWofiK_3 h洪2^}ayUkOnrp+8 X3 ߿Ȯ`/p?hC'J}({ivH8BY"-e:kDpIGt۽6xYMR.dE'@.2x& ^B^=x&ϷYksMBt^q;}(Uaw`?PꬹLgeU멘his]ٯl|yN=}䆻:ϽVUPD/CGd4WgC-i*P9k'AT[j!=W{%04n77UؤF% &|r$DxD"AhsW$Y=ĜҢ1=a1ʠcQcwCi GcB](f୍c!D30sh1pHYbHTͰH[[_\ئ~E.m61431WnЮ)j}psOdq+}.Rٝ4硭R%*_K}Ct LmsulQ8*N+:+_smܞR^a JJ AYfrCE]+HCY㩷)-H,r]W^@Wlހ^5#qI2!%: 8%shρN)j?&XdT mec+@GB H6r ?KXG"ь#0FPeZJ[vPQ pX Oqk>^wGɨf΄-\GEDJcJRH連MWFHB YJXd=II8 QQRL4g,ݜRÌ{Ŏ1hβf"ff$t^Ǜ7P[Z?2N*1!&yiVs $0Bre&I@sa1p懍Q߇gƸ-~liaHwqocAMX˜8R&"K#MlBeD}ketE9EPmcS`(O4#A#O&_g1,ٔ/|/ފ(,)jp A9<R.RUBܵBŇ[KaǦf? +%oq Eu -nDяRsaG?.êVm} #BkR[Ycȅԯ| AM7ٺ AEAPCATZ2H*Tu8҈@._Z+A2␃UI6$5*XӖk녥խ@C$ $F:@="\ 52u[v;[" C2?d4Ui0UՏB|Ձ1-^"(og$S$0LM^dC0Lz A KäP$2r!(%LZ:-b{]&J1_9!0(.(Fr/ՑD%(mMUXP@dwQć/`(= ]y[=IT6>P\Tg0똄CCDS\ K2RwvA1]5QO.a'I(us 9&O #5j(,r\:r^q ;"C;)hV HҶ[gD@C!݁Hf1Z,賊q{ l1ԹK~)YT E6%%GkQ'c)`(_%C5x,$ADeǔȈ@ȣy̕*nX C6Z4 %_t6ɂ5չy8nO4D?v&Ol5؆_n&NOGo¨y]!wǏfx!k(o|x6-U߼;cDê=Z9(&mj!6.цH륅GrtP~dC~:ȋ'pc—ڗ0B/4ۑ~)'q?-"M{*@j0ލwuMr &{Llx@kstjqk?`ާl)*6skwyxke=H[,mf U _ӣ(oF]S}.e7_TTMݯ~l[-K:]%%Z+EjJh=Efc2B/\lqQ yFx}i2kB Do6:1`$VKQDC'2 }z`Mͣ |Ru>C.Ә:Viq iSfz>/ԷR|n /جqV7[eѮ+|뛌yT=CK~fJ`XMq^Fhz7nreeէoͫ|̰)E*`_ly-bVx =DX)@8_B{ᇛՅ|@ׄ^oðQ2̹m˾ɧ^>S>^ KgB'(VTI"j9#2Din Mjo/5u6nE`;ξvZU_bX:7;s&r7y3ߴe6r4ha4g|v {G{m>vrw#?Y­/p:_`m#l GwuJ*NH dp,?v'+qMO=wؚG{:{ +dq(vvST.4vPK8i{VPsQë+/_ښp6NZ{?~^"m֟c"۫prɼ~X]f9ِ@<PNcN]x y5,I)HCDTjjqdzylkj+ю=TM [|U908d6YzS\KM2HqIS.TκxZ67 yTJpCc2 H%傩DU p 88mas8kV0{j5/GM44ƃ_yٺk>zwM;te]]or+B/"Y\d<)Oًkaeˑ佻 s8DzW-ig30`iF=ӇdթsHvw]oocfGo7al\)Ujch Tf5 44sOb+y5$=ǜ7%RsbS))B/qr`3< //.~_}GUryxɴ8Nh"̑NI;NU$@wd%"zst}=dk/ۋjwVvLo1or_/.ZZ=ϥk41\z)xf,W޻f+f}RI9v  kVx:G$ =׷RO4lŇ/MZf='W}of% NGH~|?nbn&8N}~gě[ wq&gMYdV)c^ϟ~vSohz] M>,h(HHӎ-feXCWcѥ@3/R2HWTDW}\ ] ߀]g+U%ˑd$38] ]n4n)t5?] 
tut: +氜%W3wt5P!U'-عLBWO:DްG3=R h}+G:@R/(3d+^ ]Swh̑ ' 86y czuq> )M,d^XtKE~al{ iSgAvw֓A~FrnFYz(V]eɊA Z|G]2 ~i׫߫U!XmeAY9WdbK m^ǭ.狷Fv5;.|fztwWh)4=zw(#ii-%Jv9t*}+tET `m$R h@W[G qX]娫-fnnjHWHW /`CW =] tutH +pe1j4[pVDWAa1t5.gehɆ}8*%M1.Wh1fhS]㬋;;k3\vhYvCWۡ_tE[C-?z9^ih* ɈS]P`V^L\92{ѱ~~Itމ{wylNž ju򒧷{lԷ `1pwmzcJ4}4툼3+3hޫʁR8%`/\qK6}Rݑ݃KjŻۡul["DWpp7ڥ@q (9."]h+]CWn\Z%t5P:DDWe9SnXUjܷwtmJwKZ5.Zh]{J97;js >>,tCvD8N~v~>Z &!]?ts?wD~A>jD>x~QwIgdczǿ0~󋒮S>oQ>||(n>}x]%es zς߭9Y6, ҇Tή8}™1 oG/70JWܗ9ۄ|'o)W>iI i֟o 㞪ĝxlǧ"/@$Q>1_VgtDg53~q|u8{PO11mhU[ۏ7~cV=dşfmr$&j3ɍTtBqq|Wg-$C|m}rvշz @]\'퉊1JW{dMl y> %#I* Qj ׳kjsL9S5BR*ΙT jOO9W[J܃>cGqIC#РR+ME@q5K`W%2 Z;5hCk5D&HZ(\ CE3NWJ='uI%DN\.޽j*z)5[CCdWۥ2PR4ԌICڞKA^n'D34f}r< ӢS2>{/HD~sJMN5r{Іt<:Dc; cOi#b. j1=$mR ߚ iP(@x hD(?ߥ_TڗM61Tqhi/Z3<%5'1g-$!59 FV TRAsDmh 5Tb@xבbI|Xwc&JqM((E?E滄Ť) k)6 \x\E :ЗcvN͊>PT|"OnֺX\).!E*CLH:BBc搃G$XgP(! S#XjCu:궂a5)^f&H+uS`x usQ[ `jAQ 4Bs8VM+(ո0kD(QN[%}]EL.5![8%OE"%l0 Mؑvm\s|/ 2: H@Q7f=a=g3ePѶ J,QB5 )VzL59vx X.J JM  c̆6Lq9G1QꛡDe(MJ&VW!zǒلb2 \n^EV(!.uB2j3|Wl%Waq2M!V"(Ɛ $,(  -9H33KPFt[s : 9 & c6Ks ` 7(ppR 3Շh CA[:*NBaVZ h\cMEwf1%t4GQF WPo9@G1+S$_{LVp XF SayG a/,,tF\xϰ) >@&52*-2؇8f˭!`Fi0Fi0/1 rHݗmdPG-C@8 WfUS Uߚ^mXEމilxa&k) _ #H b"ҧd6~yq~׷7XSELsS mF񖁳0Ѣ%DEtȋY}L! E=|+h a{v]:| $$hL)},ZJH-ReJ2fhNJ@A WR()f)raO<8ͧxǴ g3|4ۼܧ2-Ph6?S7MKpo|)p>HN2 +g) ǍPDaf$5$wA+9m "I "D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D Ò@ T$ѐ@|MXV| 3r!@^H @D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D\D Ԍs@v$X@//0""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H "z$0 {<$ƈhH K1Oi$%@""A$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@/Z/YbG[n1]e3~Pw*%y4ϟX1K`.Y;p U9K? $<bz; @ O}X̀yޏa>t^҇.̻?2@p3g4V*YuŨ>Nt6=>sF#n?n77s)>\2f> /.o9}50^rc NW&h}ex o:j<}R éemnSu#l5O1vU@$˝Zu9'5fCj'4ne]¤pU;9zht\:P"0X˾}wD8se}>oD(kF]#YaE4fߦGND4\\f67+w6}lvc}_[!LSPϴ<oۯo_ Os8`pv͙R'H L73LvEy70Tݰ(\],.trQFT~~:ոm.F?K]0厃7pi`/7vZae̙o)ro$rg!'~#~Q`}lEMa79;=eUaAfh=P<Í9*B3.8g9n߳/myc>9."CliŢՖ[LX~/Oۛ]#~s/n|2wpb?|q3ݺt9 ?m^AvSSJ7dvL0An@ߧ4.e^gCV-vV[vp4_}m;gœ#{&F}MQtA$ý#ϔR hh4cZ=F+7(~9qJ^@tPķsy6O/&mRUm_}r𭭗\q%8&U׾bw>~ʯwxM"UY8de6\b`_S}ƁZ"LTR7UU*c3sv+c;_6j"%zƧw8[._vv=K˗$w? ]&WbXB1+V;NXQUc4n\ԉ35UldU0 Y0eShVl+v*/ʃT*Vj٭q2!桨vޱjKV#&vK"#0 sqs\"x :κbaƥX\;QzVmlH*_red5dD.pAsX׹bCh٭}P?Ta.(Ǿ+"'E$E7<V{߳IkVkeii2_!exf}:آLyCElfnE:/tq-!u6%htto B?+E, h8(kip2gYpl€HEH.lr?= ZjVs5(^_/xs%8:ɻߑ'=y+DlM^QHdgJ֖[ueQ.ke3,VyqUl<'ܡ)UD\|Q<,DBT vczȞ7z9É5v6NSSRzGOV lL&YiY< iK-FVJ%ͧ|*dV.9'.^i#TS3[s!',kX܈p~>; Ke-fOI e|Rl9nM]mS͵n_আK\r/ p\`wK _~rsݯgmMhƓ m|e?ÛPoUBfՙWup 4Eݵ{RrZT.C[ ,2͏.fl_a(LXh!l $SB#hd&]ʸY65{apDgR\Q[\eƭ8"1jB80~l c‘ߑxw&ᰫvU3U]O.,C>/m<P{A$B)Jc1pcbFT 3uRd@qA5n2ΪdS:g (q|6ӚsprMf6Fgu< &yڋ]_?tWV|&Re'Vj*fo M2謓1Zĸm g~4Ir?ϤJ.RhKsrOj틟Eݛۨ;fWZ5-ͼwBf|\*TPJC\)+iq&yMA8TTu$/J߹&@۪0Ϥ(S'^Q\|>;c׏~p3O:KIؤ:Π>wF`70v}QzW R03SJ<+wSRg'i_~i';R#CgGĹ&&c @L“RX3).1.ڃWOܘՆ(![C7]xOzٶ?")fp4CYf_EbiE}nuɜHHbH!PKd.KY{{^ zX/p53\B-IZ!:2T!ԘXF tLBрLIa-ʬ[ zqfB%1j~%馏}L=)5ddz;| 1ʤ{z؇pRmL\%\t"AQOЌz⺗-;w`wf^E:=3儬ɨVR_e{~vFؠ9gLZD`2fKo)B Kf) Le yf1!s5r4~}_ SƁ1rpKh9^\B8ߗ/Sce8%G՛` } 0*~Aɸ `_L2B4/%WVcOU4M&b~Dꃯ}0Y;뮟B HeA]&!P&Qġd ;\ fl_~(ՙ䑕B{ARf! ]͸灧\R<\Շ_UbTB6ϼ4/"]U %B ,qut{U2f2Ǭq2fkm9樫w=.vսż*l O7%QDy̎t6G,,im75= ͟2V`Y.$SY4Jp''W%ſW'{}{%M%k Ɠ$")B@USU"LNjUR n8U>iiRb(@āb8ㅐIĥEWyXFheSXΥAQjr:k^rktUsYXμGn >KhN*LRDҺx8!"UdҖ5[1v\ٹmNLg,oQ ~o~H0p2XӸ"ÅehP۵£ b(dϩ'3kb݅U* Zi z{#,OEa YI nM9씉1P -? 
5,x)$8,DŕM8uBf1O4Bt م d!'3cNlT"P`RIԏ-Pù;)SL3ޅ/E/:"Yigngjcؙ8_ZS {¬rZ9ΥG|_R㔻wF=鴞2!(|+Yl>x'̠"P 2!&(?ғZ D8qqۓIYR_op%:@zɻ:P] ax epPOLL}SS~TU+UBzm?>URtr5<]0YF,-@$tv9kԉOޫ׸O\Fw0YS}wgz{1DՅדE`+sn~έws`߮'U9|r{P'vmO Wtmn.ژJ> Y,߻MUwapFȵr nFybO5R:>2NuS|(2Ficr'tȎBqtʿ^_}qsq'I %l L~7 .z#MuͺvZ&G]Omrk}|RfaV HO߮oq({烫aN)hsyb"*49J%o#jTyH}t 4GQcXEZ4hr|J^~ xm uB%x!J(yIr(HpȌ F"dPL!BFg}DKFQ(mDn$7F r0{͖8_4w$O qpǥn(B7e3bZL8]3*ό)H-^7qTR)H@jcDcԷP-T=RZܻ2PJ!(</νDg![1hɕcVB Jo5 %4YЯZyAVsLJ6Caw88\.b>Ĵܨ4Gyx V/'H&f!9mn}#J}/C;()Pdؔ&| {D%3392HJ{L]dt[íqw薄v{n Ƴ&'\cO)ng, f4]»7=N6BQSZ_ۙȍ  CWK_ߐ [&xw5r7޽yxK{Dv٬; IjnݽM廛7^a4lhovuqݘwA?zs薆BwY\>da|V9]}cפ_|!Kv%[ \{Qsku9 Jd^>@fr5DZk=9Rəj9W[ʥjހaq-8)Y| 6ӜZ~ O*u@;ع KwU4oH%I{EbtAI 1c\OE Z9u)mIګM,>͢:[ߜyZmlvf#RWnFn9  ^yc5@b 8P1GgPўkǭ31V"E"^ٸ `xsARo1[$(6:·S\\Uh֥-ѷ`gNן\J[S]czJK+dq`r%SUY*C Z}tJ22Kh@`k_|IY'E[ӌ&=kEO+y^(m*%P RF$W3T&(EyV4U Wx+V g/~y$~GAU "qAҬ7"Q \v7ꏲd,j.C5kvWp(`hQEMkv]GD6e7=l/n]m>ed @di "QǨ#R;Y6JR?s?)ןg:1ewݛ̝_vupi僈>Iq6dn]xMq:9/{rA yW9[wwT-?JPxdmiӝTn=oCPv0KǍgԃtŧ;x]TI8 ÒNWmoLWZ+j@o+NE߯`Zd.g:^{>%yw=[\O?=-*ny$zG߫Z||u⠢rSq5tU/f͞.7iW.`}k<3K<;uJYo:?=D @+!K.+ʩrw}4  7|] oF+DpJ~(vS'hEbm,[sHQl-9DL[c曙3ߜsxL2]}/u4-}W*U{&abzxݦի;Jσ|Pk_\2_&w ^]O7?|[T^]|!o0pzoG޸z%M1z8ԏXbe1r`C  jXK?FS|aʹY<ږ}n=:sfc4o45<'dLu̇_3"j."nŶ(% [cfQ~d2e-r촭O.~hc`.`OJcrPҔ~fӭS[Z t6dO i1vڢÖUaLǥb؉(qdUDc9ۦ8@tz=W³}{SHT) $,:J51O,S1DPLɒ 1,}H wP vS6l$s e#IO뉴1LiLn !&KhewwC L˰U'L6yBN=!Na{}@\qj[OE'Q'Q&QeʗDV>Q4Kq"Y [ ]aJ}+De QJ \\h A+Ѯm=]JI;J1OY!hWcJvBJwtutByDWѮ-%]!JHWsɥGt{n?bf7] ]Y$weqc "\K|+@䮁 Q WCWbǡ=­t>v Lj T\}\y),yWcB'#W5mu), 42"Ke^]0aG))L\(Jp>zu]]`uK b3tߌFxyNܔjb,+{N.|/yc%H9CY(B d4aBF"cd4QhB2a$b>,(?FA4cUBI,Ig*8#hCY4R_,@8kE(kvWgќEC%ѵﯵ/TVKQ.S/YmLkl!R倱ȁ?_ߜ=f˜`A 9Y#CJ߀S{Hv̈́'FxC~FhSK4(hitӂ(F-¥BBb] ] kGt%FBWVɶԺ+)aGBLBWVg(JIe5M) Qm+DImGW'HWZZւ  ]!\ ]ZNy Qv+#B]!\BnADi::EJrᓫ7 ]!\j}+D~1Еq%F3şxob" ^P8?۽@.V(]hL;Ų0qڄBZ+D1թ6it {\C~6Cˏhl~n|SI.#z\kfkvvÜS#ƜFBBӀVQvF\w4}4ʹ{DWL+"7tp)P-IGW_4l7tp91׮] ] øyCW_ vBӮNu6l7tp)et(;:IRFKQwhm;]!J۹O%BBWVDwtute,xDW BѮ.ՎhjGButute->2+˹/th%k;]!Je;jJ8 S˚+g4jb<1ߺ-4wHd&Hz-&#+ x@KZB;IGN :҆f(kZaѨ,=5RiR[`{`*BC *FN! h=M#J:>Ak-<+lRt(-] ] "LzCWW0_ *vBFttut%)'c/EWWPo|v˶]"]an1wpm+@)JSeKWX+o ZoJcEvB"zteDzDW#Bt(2I0cխd@]}=twz#u|jckWG\%k[ n@WG nyRzM'E $6õ<r5hFoO/'O{Q2٭'~R}KD؀=Tu{f\$lPohrꍙhn;M#ʶ9;"4͹ZyDW7tp}]3J!] ] n6 * ]!Zz'2vut%S' k+ˍ/the+DtGW'HWJ(*}A+˨/thh;]!J::AFiXyQp5׾5ԝvute0@' [ ]\C]!Zz*ЕJ]YG ¥|D;2; f=2]59.]5CK %oY i@WGbI5kXH6'evBLn /'4Sk|9CĊZq'4ԌTGK^s׀ (\\ >W۷#-Wô(茦&$RPt0V1wD[%KZ)ߊ<x1=nmj+ tV5 xvlVP^ybfrtV͐lCh_ Ze=\O G9,&bwϗ_=/ r)媘d6 '~%JaIAT"F$*iM"bRs;kW @/_D7)oM3܅П$&^{~ }'l[5 'xl,TĜKRǤՔ8Ֆf%SkPดG^&lzqV `\lqTx ʟ%u/ͯV^t0 aI/|ʍn5MFU-!Ga(1Z*ݲЇ1HSL)"Β4""$:DTDi-DI9>3`)kdNZ!P&JmHZ9Wr^a@gPX!k4f`"8%- l(K7vv|6ɺ1YwК&!IdcRdž[O(2X2d2id6(a'_jie"eQF1(q'"XR*&b *Z;U%:&a'RpUy Td.3J#JM%xK 'RIHh. ̮OCyi IY\̆a~4bjm,-G{QV)H_?(!xWo=zgX xypS.@/dm{;^;AI'$#ejCx1!D Jw#*st5``w'(uD#pQRb1Z^8+f\pWerۚs$O?.Ǘg ~7-5 n$5]omFäMxR "|]z8q˭2pU#պBrFQkUs+ >Mfz`H}ڣ(_w A~P2kYp\ /@G^~ 7_xu~A8z`Xz P7_;UUCU UUs#vCz;[fz:T{wX0fj-&^ӗ<図tºhWMp!] 
^0ͯWen0֌\?@+ٮߗ/Eu %WL]o}AIݾx>&HIm$OF\P.FǩML+& OvX*28-ciF@OK4S* _PQ^X̞WSzT=JW˧m8}e`6E,mfXLEj\S#*ʴq֑,K2#KI,S)"vtM'wtO-B$ߦ~Će뼵0W Fdq^xaN93>Xy;I?O CM蔝>('(EUۭ!U186$B-o,;}P4W`q8o>VdŲ!XJU x2X(=ЁՂoKw7X|K즞0v| %H /A˖!L7`~ ZXtPIc|9G7҄85Z@#iT4IfTj%QXxɆAOXE\7WL* >^q0p+B oC',^ }96ߏ'[4~[aG\wtI.17lmm;ՓҼu7s󼿙p%.J AG^tnذmrG+n䫦Eg-4miIӢkXzLdZ.!Ui]w{0MA׊XIpZ)CU,&xlS:YFxj3Z,tZw%+Pp_ھ&m.1Z[`p~[ԻeUVN;e=UY8a4~qPz]ewmH]'Fl.U{MR)4K+ %/Ae[ taDىMuxݧ:?whw6`=۞.hI;WL>Bɼ4A+}Q{*FXdpM8*c@*ebR!ftŅP4آ՚ >_-_.T86g(tB/iw3^Wt>xz>;2#j 6#M;j\С7<1Ve&ZX} 78ΝӁآ:uwf-+2^f s%N :,:ی!,@S@.~zrbcG*3p,9"I x!Hmc`*I @t&(u2:3st Q&%є#J-Q#IK KQr,\PZ2XLZ3xloO3r1uWZ|"=dzffXݿnV\ygécB"(6gLq捔(X5֞PՒS>Tl"6[2&- FLVx3qv=.#o*v=ZoIzhhGn`wT`w%0ːu3Od^{mmĚU论Wy5fzzSey=% ʨWl?~h:/+?t@VHA۳?MPx$EMiZ*4kQtU}'Jke"x(8'qR'pJN3+%%茴IY]$0ПY2ff=kUf~~T%~j-}n >r eI Tzx*Hco(u:.PuZFde>T RΑ%#3iJ/]1gk³!gB )EBeEĂ1tCifSρXE% 5XKI6 : \w1 <_vU؝hsVo;w\sZv[[ȭͪ}QmϾb1_*mN^\-Gooxcқ1;sGvk~2.v XcbrV]W'~X *q>; bkgnwo6qDq\zG,WjmV=`e,}k aVcS?~菍hrXzֆTkt5մ_?_wW-Ck ?Þ_~YWwOFUvt J6ޞd>I^]ƫͫj\ m7cY.vއΜ6S+|1j>{v#Ӽ6j~ׯ85[Rv~]n;g>!?˫W"//o^o/՝ePb{f5YWow#~X]gd6|[dCwnl*٠ҿN~Bª>T5nO 0rMQO@o_'YK9!W^yG/L~ECm6ԽeYӛ!;n+s+]- GYlA{Cv33z1[.>SzϼvںmUt' *IQkV8Z(\$etdX̥#zM@T^w<{tXyqu.vW mL@9lc^xXs' Oymõ6h/7-|j.+)t9+GFM[3SNy$牯uUNӦZ>UEz#T n/$C 脘l4Ҋ$E6CAKHE@L"ǀ"5a1)^rs*4+ }H{Yɖ415Dr A4g4))i5E&VcB9HJa/`x C&JVy$ CSl򇯭P~a&XXoMYD<&%&Qz$|f^zWb,AfTӒ`Z,nv=NntJ:! }ᭈBT4E5 ɛQpJ́7GJVH& :vB:6ͩ:EZ;Vnq 'rZ%{Fk Nw7si*8UJ_`{osjFgdMaM&2= E ǧ`b2(ǣ-I6*kh,k@ )3J(~me58X!tNڲk%nEͯ BYp)d 4['p285Ϫ ~ST*aȢ'µ"+jDҵHLʼ1d;JYZԪmuH]!*›ț@m''qTJ|):LF Gk3 瞍5`URn[Vf!myp϶Y꽎.^\L~8?s.HÊ  8UγU%n%"Q#394 N-| N{PR ʺd5ZMNrm;C:d0"kٍ8Ӛy*lum![ՉR,[(@D]8ƒ1Ym%/ve[^*.&ˇ!hʎHVfeoDbH$616g7.U ǡDqk 3^:R\2FTċx([!l.z^!M1(:Ͷq6ZA(ց"e8lY?>%v NSdfF2ȸ8[<%_gUr(.Ƹ(\pqwbbb{(  24xYR$X(cHY\<. Zj<(x,#wzcg\jF}#n~|G% 8f}$ػ޶r$W`v}X$XLo?,vy~48v.݋[dYD#:N|cO]VSE+ZIR).dBe@a*Zno,0-EE+mSR6DZ #FfKאrtPuѺJOUzl:3Ng N2@@ (I>Y!h ,~{k)#h^SIj-b*R=D(Tڃg Φ 2VHG բ$HْU4@(:{g*Elb=32W\Ϋ.Ku)bUSTMbxcsJN ,(]qMư 8].x7ho貏袀+53L xf<;\M?-T7ћ3g jvhs۳rK:/w+,Gq&ŘuRD`d,W l{~m/l4!ZkCYBp2:[ƀX%ƾ{F"s!){qx9/| a~Mqcld1KbX$[+D'r֬je qknigosPA8K)RJ/!+4Z 2()#^^lݱ}/Wq̚*PP eb ْ1!+YzM!l_5 C9Q]P$C)V>R,JBы A;@* {G)h.BZ(Zi(H Ԋw4I<NDOˎ *$!XmExF  8z!AuM&0kZ (YCQD @HMlb)Aܿ[|oMYJވQKVBW CJY]$z,K 2bly] |9;|sq8m&K2M/zK^{2t[1b*Hʨ[eZ=z=@oWgyk.^}HSByRwjY}aOyjL&7ysowUK?[_/?\^}\acU|Hc9{ƚ}Pj}s(?:]5C(hEϙ Cl!PKfl~Ohi}R[hRʤNzVF5#%JXef=b+ʥcH[>C>=x~hTlbv$ u)w4z~cöǁ`Sy$ D4d")iOQ:iBCz=ۗCcJhsI&!.Y@[ `?,H9EţFL|3',M}w_Oy͞W/>тgՊO|krQh FVLqF5F P;>KcC4*o?;w6_I[lVR-Ӫ ^#+3xsde]}<=bo}ew}mw%jyST.>M@c0Ҏq{goZPZDk(:FW 0䍲l4J*'a+-oz AE(I!e_TN!"쫪,EAM8׳5 ^;v>$;#y~{}sy鴧~zyc٘%%>\z90RY9iM59m${]BL[ucmwh@D M%Pd'qKzNGOl/0Bd/(JU;478?gW7WWweUfk9@-ާU_WqQHl< 5gh3XQ*5ҕ#_4DWm稽% ]1Z)o]UFz5t\zw" 襴Owuݕ._pu7eBOo3/˼U3 !^2`2}|ec~~8A7wv. T^E_BJ JƼrN9缈x3YJ`d.2s@e ,҄EQe#* /)e Yh!Tv‡woG.үo.gq?۳R)mwUdS/~kqzS4,\[݃ Nb(ءrV殆%ݛ'c>sD,$6lfp kmV;cJ2Mm^vq[D[NNƹ(wئl)4[0>Tɿ 3?4 M|7/xq#;*T)Fہ$M3`Pt MW +J#M!MCDCtŀUh:]UVtut _KULbJV誢*J51ҕTKUlڡ ךV  f(atxZzn*\ی3h 1xgZtHW/BWVl놎ڧp]+tUZ:]UdG:B"R m`EkvUQѺ:Frș;l*\݌uUߺ(Е-'Bmt`KW=u!YWbXEĤXfcGj=g6go.} =HQ}`޿ibKM;t_^ޝLJx.Nk#vw7ey[xﺯA`2<]9fmr-2tH.K,FẠ h!L;2-=WyUޜi2C%R.z";"wbeU0h]R*tO-}I;_ɮ 64Tx4-ޣ( z4[RŸM=pD[ nXD h}ڝhj4]Q.%4}<4 ^G:]Uh;]1JUyt+,6CW D3*}ۂUEIcc+ ]K6 WC+tUђ:]Un!];gW ׉f1T0RQֵDWP3tU ]UntbBV4ن=Gp-BWUEn#+$ë]1`^ ]K'BQF56{25:j X؀`r!d2J,d 'FK{ЛAٱ?K3M6Zqeɲ"ql#B#hVµCh*JiG=I$hA)ЉBrB ؕ{H ׊f*ZmNii!Њ^9ĎtEZ%tjMzq:1v:f2E13xųɻ7ݻPN? /# _nxjo5*Y?3޻'t2!ӅObه0xi)I|}Ɠ b`/k jo1tj! 
n:ʸ켕=o^Zv8PH?dY]L#0HU%;cF"sޢY{x3Un7~fZ}I掩 ^w;'ôæ:aF* bYjnNyryjK -Ov ETjBAm85΀7Ŭ,s7P%ᬓnW5tV>`T[CnT]\H (sXO{-YYz {u+KQsY0N$L<)1t~onS 3fb8%u0nZn$Y7RQ2}YvyvqHLfwpr~:N\I751 csn $wr:SMRT2Z۪MaW*4gL,]H\zxCzm]!wyjrYHVZO^ԺjUqq,JRT#ө<+*Y"S흥zf7(ff#jQgڽTe?gjΚ^ҚBpy [Sr?Sn@j׫tϪ†1;9[u7b6Z+^ed79qzװ*3_̂vftq75 n7I!ٓ^ݰuhUR)k)*Oնh352Lu=mi檮5*a1]ɤ;k{5dk*zKq+Hͩ\KAo:7>MItPkiqeڃ1e=_VR2iPtDtv>%*vGZґzܷ3fڻ$Լp\?]ϚRhcxܥsp\hHӉtz#1g>KeY6{a4**-LنH*v5;lE[uZg$(p{ivHS𒞗]{xo{B+;)zu5]]4F\low㳻 5͛O]Hڽo[uk}&p ;G퀻P RTؒT*=h1RkAT)ڠ*L!%Sw(M&蒆U"%2H4pޅ.TP a> ˈDEؤ{8 *BVC=RS|WA ?Γ<{U.ؽh:TS_)e١2cl =0pAء!rglWX+§[/vN{?{J~~|5Ȩq`JK#/BמG G㑌{Y:⍰>DDuއQ0 gQҊ`9ŽAB@^ގ)n4Kc0nBG{ەhq5p _1V]|r ]\w n#yql^snY+Hcz凐^[&T| ZB %rXa#w|3G-a݂5. kN~A :k•ɾ˾ޓ9 9܏Kݑvl׎y1y' _үyQ린QtjHiُd{KևDНιtү;{6v|)\Vy0-;V*b*j`%W4URs]nqNQժҢ|Wmnw0h}:P߁jVQotם^WQ0L"!DV[WQ_gR[ƕbaq~%`,UOcV1!ZG{BhƷ'kY}˸[‘vTusHՙM <Ďj)ٞ)'CL0yߍLn^_7}`Ji^/ Sxia㪎9X%CV5D:;\5[D5{ʑmڭNY$Z_ĊObv\* LHæḪu:? i1&+r&YtѹjDf%Ew{A%d m+(K٭?s:@]WA(u'sKy|H)oC`4T|йv1`?rj&DAkާd{$^56 jU;W- #t=uIlzCM4Mwgvm6GW`=Ox͟RL}H@UkE[yR=#x7Y3Ogm#k1]6X>xhy1XJbCQ-c3Z&z{B\/G9$rHp+~&>Vn.}rn&c;Qm/=oY3O+"< b_"kC0N I3 Ӭrg y(N)$gg1n7Sp45AtLq柳Ŝg q$Iul!I/ǝ)$8(9KuP欱فcin FX|Vm*?u)|8ُmḧY2kDD׍(SDgI4MH4-0>Aq`7m->9 >aN|4 SOE~]3ܶT{p,մ$"0\$>E0ED=jL4%EکhY<iCUlua7|u9X]!M7(lqUs>6`̿O]MQQ$ A0[0@ja.L TT01"A*P:]+JU!TbtzÂ*Z ͢eYV]e֣<UCrпg>:|i$CcpiBn:]*;+jBb2>{`^_e<Tb^9]W 2uРˢrviURDߕ%I/Ym=6$3ԪJH_(7*8@ |RxJ%$Z`6Ŷg6s8} ٺczL7tWc5Pc`"2G3`6(^.I)~fc+wS 4O3.DaW wH+[!NűU.~d7+4ҿ&v03LCМhݛ:=CEe&jw}baK2:XF%h Y^i`w)pe$DI^KVfD olKi&@oDGU0`oJfG5T]{=UW.A7O%%|Ky4IKyv:bt;˞bB_>^XTt{Kt:e%haOg=>pYݞJH.byvjj.Yx feFAcr&t,x#^LřćdcޔY/:sf䈿Y/gLЙ.Y/Lw6 w6N^rN'/#:Fg:qޞ!{AbbL[AZ ϼ$jYм4me u,!13qn'6m/1ȣ8M&S;72 ơ)8bJ$i}t A<{v£S.1O>l&X(È(Yj7UKLS`SQ}08e<0X) KQ@g̬tzĞbj?H` Kl3se;)DC3wHs{t]oڧ6ȦkH2^S |^}U7s=G~@j2+W؁g3D u`:J%0#gSL9 oz:v=b2"0E&NO@11T!+1Bo &<ApUC+1xF.T OC%ʩDFEӉ2Rr3kw<ẍ́2t|zr!KTy oN3Oo`)9.W|*EI}z\L2rI/p%Y||>fXS7g|>fS[rCh:2+ZX**xM{W\7T1 jX>nF}@%SM 5T pqe_Qwk `\-1JuMSWo+ʯa R n. iPUtʵ LJtL@qs{>AU[ UUJi}g}3q l7I)ri5-bFJD8'q\2WDM㘅p΋r%Up4F"mbqh;>]tNǃwmIekjUR$.+N$ Ejy8ѺKe$3473H Tߍn%lj(thZoHQ\+hܣ*X1AT{*yA&ؔ%(ܮbjr} c7٘5=*j޶ys\(á\:ڴpP)rHtoɇI-"%"ҦCTGGPG?i3IM*ħMY]XL X.EP6![Nk§;V5צZe){U|\ vXH1kϵ1^I\$:vUc11])\x~Q\u6;!+حB(*Wf⥲ D8# -9~FٶF"6ZD I"FTEȘ$J ܢ[FEbwֱR"kddf]jxLDZ -C#E3H`z GH")TlM# B.D/te 00a?v=L'wuL*+L'^4>IxS~&&MzI.$ 1 An(oL%k\]d\@o+ޟ`"ғ{ ћ~ZAQo9%RR Fy޿e@ )瑶6Iݧ~H0Qదh TTscnkv4)'I2n\ ŮLc`I 50) 43g(wzѧ.\h:gPa'F$ຨVs!vB}`\Y_Ɵ zEM3J)ihfƤkSU@$ UgU[h*bLkNյnnP P(\V NwLK~Ynmr+w9Y̌-~ur~PE#,;üIF&?lo:uf{77HKB 0%c 5 BHYш qjtplyܤx.ӓZ5>(fb+ŝcVȥVJ?TťX B,#R\4j/\$ZҨŕ fKl[]N`쳊yl 9#Xx82619wv>v[pqQz"k维u(ggZ-g/XM޼Q`ԅW`7N裡rA09E˝ڭ̔'Z\cx!Fw;'kN!ybAtqk|U0DU rρAA>HAc ʩu#ZɜyO~q\ρAe $<>džPͯD<@7Qܩd${rxG[H=pEBYX2Ǽ܅G1'z$A!j{Xnqp&sgz`?`  40GUdvQ,sp>iRD1fl݉#OՖtꑼ"4TJTyuͧVOX^N`uN=qDkSۈk=7%VUTZGޒ[zT5ZAE.ZeE1QK֪QUT)EګjڞMsS-&{`2EzX뀣}WJ܋Mނ\R*/rW'w㞊\Y5q㞊̕[hoFI'Re9 ~%6#Kh\Z>%¹jkRsVJ@b$w ͣax]م^z@m)z~@/=:EPhC٫v 6w5"%+Jo/y,᜽xHrY|Rpt f(6z%f'۸POC0gJQ$\lZJdT6I?֔ilq<ۤV0VcrIgGʔIqP)f#.F8-(P5UM.C&w\#|yN;8m";+$i;)S#oǒG aS NXu 2h~sP}:yŚkcűZp^P ty=BM.RCN{,9+4cM):w) f9JZ:_<dȴ q еrU6ЮQ~Ehcc3+F+9X1_d9FT-Y/1EkהE;lÊcB`_|մ,SJQ+#mm:ݧers[9(1_a'֢$"[D4UdCmHQ؈$I|QĬi52%2R\(!N2HJ- ls  ,Ĉ x9E*bEG|^.Ӈ[G2qnX qcͮifܛ8%ɑBf_]3RT-}Ӝ2GF?kI/I˚ vq.9CńBAh8`|p+JVH"6%bq7{6˭+h|눶؃6r.6KN; %gT&y1%KL1f> Bشs{hTeJ}T<}c%1F[0|& FBD,F&L0|Vͅ?{FyOS2kh[[X3 -)4'`};*5x7ᇇFwȒBˁ;{wRS:+&Nn$oo0(B^3hլK۔2hW-F ofRT]tk/8Kkx Iαm-roIRe,B{{[=ep݇h؝őWrv6`c5lc ƭ ?My*0\O~ҤGKnh.RyY,Ғ]=9:M9ۘgO_@rv] r._z͢E>ͰlOpgz xz nxl3V5pg٘z v+bA08KYD]63GB 'oƃBoyo~QIoVI~i"Gd 7÷8fy;]}̲+'{ʷo+_sFQ]:$BME<['/1AfF%RiFkU/FKHjSa~behΖ[A+s{w嗷L׊<3uV}F8405 AMAh&g1iafwJ$^^ dޜ0ڞ]ޑL 7n+tvOdKch,,7paG$BQյe. 
Ǥy ֘}Ǐ 7`u)+k:v%խzXܠR++{ܘ8qj^t,S > +@M)oZ,)uVuFTZf%1u/N\ VP}cs&ܱCf?cv4am Л_]?0߸m_۾:}u\r2w!!(B8 0D(Q=K WPGwe>ebw<,բ,ɍT}yfrbY&͖oN_IN'y /k<:/ԊpM+{y۬y/97aD2Ə<Bb0xΨB9 T^gYRQ+aKɵ.Q[;ȖyЪ-o#jLʹc2):N`}?c>2D}y$Q\ Lbyw_9_8oFa2RbG#\26z H==|3gHiF'h6 1mIyob%14HF[xdb%1< Jk|yO36MI:*hN'.bsmIpM=݁tW4eN֭9 : \206hlK9'OYQUtwJLk&&]pkLn[QH'˺^MpzL8p`2N/ n MkAlfy? wcn qԌ {/þԪ"O^ 6;H?9p'y~ )wv.ƏUixpI{W9?LȲyRyrtkj:UX%%)d11㑒Z CoVY*lȎi{U7krK$d/QS:2\!z"I1w"_\&ex9\wa&|ܴAM.0҇UgdiH7 yOYͲV| M"HуII3[B !(ʱnv գ2a<`_&O"T@K! MC%(2R%Bpp=,tD'%L@dP TF. `n(HP]\Rq0W+S> :p2OǺX)WDvD~ vqXxƒ]_eQEմ},VEfSWo?ٙj'XtM Kds(|ښY &p U/ 6`BxA "LlHۃv$8i$y#B (Yd<*6`!>}ʜqH$&#Z"紒KKg9o]k$I=iFc bZ 7\auj)8t?Iw9-_'6siDLyݨA|,U qd\m(rmHVp;yWHP -i" <8_#IEѳNX:CP9z ucuBF+#{q!uWrp sJtLeЈr-:La-)l'x@8 uF7KH{+<a?1ybCo0cSOzў Qb_{G-I9E"9b5Hi aAžc>—(C_"OYV.a"6TF*͙I95AH|1FL2AT RyanJj1A*{uNf`{RTf[2S)LR)՚gmw0ɓ]V|ZD[t?nhte=VK۝"y &% ƙzq7q=^)D_,$yl:8Zq8: + d5ޔ~τ)afr;uмu`^LW~AwS>YvT/t[X=B BՔ!pS#}qx) `\M/QBȕƚp$2. \ .QP=̢|g[Έ(̋}nGl>/ipkRtͷ|c\UR|<Oc#i#]쇁˄\3 LkD]<5Wl ̹nWqhc>=>PF8L. #7 5EQᑧUn;ьfW?|D0Mt _4(:F\k0 bDR'n4/h>zzvG_<vi`m\YOO2E.2%#h1EԳ+C+߿lBs2&s0\Hv}$) ~`abcMA5XT0NF uJ6b! q?bػ6n$rW;_v]%]]mm* `ldɡd٭ )iD63I~eA@馺?OUd;! WNԪ D$KZ&IBK7i !6v^yK[;P|z (T Z'-Bp+RߜB]+l0GvWff!}·ǛX#uXɾ@щ{GI%`R`ȴ[ u* I@F&4iN(<| M\Crݱ, bݡn"DD 4R ?hYq?-%lL !Oc%c+>*Ϣ&iɟOk ;,ıdwOAvuV;a V A$;[7 VF E)cFILRiIm'(lHDıdw88CrBJ9ViR?Ui R*HZޏJP |ήs"1OvV-]^e uozI9a|Ü.h7(&2s>^I+o~*X Uc`WE+&B@L;Ph-.Z2t%˚`LM!eߍdow\˩KU>oH9+6!BsI}W`Dٞη+2PiWGh*v\%R~m)a-  l I52B}#O]GCG晅2";-}ufN?xtOKSB!!˸$%OSvt ژhr$O:a#D$SIF:nD[G%g*X$$ Ɖ0JRra !:RZ !o$a$)h/O1`n_~0S׸"It`hb d |zSj,ۦHb; [8 "~tMTr{ʖ\QZ-3_ avS,'9'BkV7충{U#,hjQ-`/F9b}`.8_IJ xYw;qԏ_q+]ampz]ͣ~l)U^,U|cUu]Z/>7 VǍBz;m0Wynf~{~O<급9R72ۛd' >nr})糘CzMCG1&wi|`ү]=Ȏ&NKz_NLJ 6:ZzYUC;|l4Udra$Ͽ8NEG+@>-H{B&+RW Wmɳ7WrW?킋XO]]ߖ =)@#ujs2LmXCろ(4\XU&EЀa|$G0Bu![{ލQQ* ݨYa0Ĉ ee|SeLEo4k3PQkU81t cg-jǚwؙUSgZ3+lw fJR0ˊ,u@.RÉ`h-`o7<"4ʧMϤ^C3YxH a@8ΖU .pų?evYj/204&XOа7JjG_O֏ Vط P-8g֨u+xI,PHQT:bֱn /Mmƭu R^ -I̜c eå29}vo'(bGs()j.SErm mdY B Ox?!5\eQKKVz}5pWWo\߯$b[vc7fnEP.ާ&ʫw"-~ s^f{*[ò*om8ƚ yXt~#P/̑0z(JV mH:NCo--.;PBRJ!x7(rz4~0Ǫ#]8M~կwsn|Rd=V%e./~EׯPg',_|Oⓧ(sq2;T娳5˰;0Ԏ,L5f|2/@1- ȹʝt y/.{>zC78{wD}ً2ڑOF=MYCT>xl|! L0BCT4 7tݩ<ù'p1tT7 ]+/frg"ȷ?Wݪ}ՀкyuEZ򒻭wms+E6[|?ӟ2zmzDRzK}ٷ_j笚WDE5go}ړiMH}0kՂm<\!_FJUv& x+Vp`lEKVpË )t=.b'"rG]^ĜM FO-FX$}X_^,텲ֈ_D' ] qt(&(_X_K`кXLvf[V(kqED8S1oYsfMH[Giiii[4W糫y`hL(1 ;W$rT\"^KF5P8k ^_ϯj^ttg<m,rxͻ^7_4 HR[aD[J0T Fjˠʮ*;t+IeP尃*;% i3INn] )_-F:jJuK1,TUrhl hbҩQ bNNt6=$* [GrWs6^%J>zm0jaKfgp5YU+;ے'rpZ5 55f-: )ڃ3NY kg##fh8P7Sxo՜ ЃVL/=*Akz V[MOujmMPj}S IiTFڐ@&_w"t5=;u O\vqʋ9kIDP^h$Ӛ&DOWj"o<T3B)!S LѩDUG ^A55^gfPjVHjd/?Qj'aoǶEMxF6|׏;eEtcQ8U^' ˊZV!8h)@i\YYˊ (Cb˧e-v3~5 OȑO2|0md> O*aJ)( X"4(PxPtal0򵳠G/M\Sbjs O;kmoh'BL@8^2bX ~c7Jz$BS]4gZ !s%!|cn*m'Y=-mamw,m;7/c8*?^-HN@{.4VQqhG܉|6hms*.^b[\dBrб=e0$Jqћ} WzRxΠ/P S :lWn?{m_r]~QJU(y`$Fewi[>~%q.mrgFגH+ i,mT29MrbNiJn.5eGR dew7oS&zyD\? 
|l~$[ݬe7 m׷W]pO:pyw;&Iw=S!JrסYsb95\ ~"wcwSTE=-ާ)q{`3euKߒpM))gƑ0 uRۈnhwWtKGoF6<䅻hOyTc]qnF7NFvKA뤶݆Ep薞}B30l`3@%7 ӝ; 9haa&&B P RXU+F,H;1b#T3q "Y#c-#7]DAh ݻwP^^]FWf_۫+L٪b$J0e~m u0zQ[܏ymIi+l͕Zz3 6a܃]-UyUO޾cbqD։-f!IF,(=waI&yp|$p>IwaI$Fک20[ݻ"Uߟ~.uWQ f.vԝWg6)!ba/qWO;}w8F'(5>=qS䷮.>O^1bne_rHc]/:[# FM!dj>U@a3joxL!Lc֒,:~?dkufڻ_r2Zz8l0`1!(퇃TA՞NC5yl,1dWˏwbNM0Lh9Evl#$Beۜ"]W" 3 pIB4Lv ;:GQh;i0PQg(@H hrVdr8V 5Gn78&sW}a5dvV/鬄\:VV b5JhXF.+Ej"VLVv- ؄y)WKI}^uTy)3%R}Rפ ?-~_RǹFuw7o^m^v\OFrNjݿ]6Hv߯h@Q3BBTGw#u:Z@i@l@뚼LZےU!/篸f(çh뛹aSj7E^`MtO ;TiCZؚٱKYa{'3ZpTQvq,P6%wDqA 0]44Ĭ'4k9)N@kb ltc:S\ e1ne` Btls3 qzi`HL3d"Yg XdI s( I9f'A [&1x]'k):$?Kndf(hrE*EVj5EY} 2įhDNRw1vj2D<sc5䎽ד?鹝n)]]_}Vf`*h<7!q!J,K} kiΙd9T 1* XG4YvC< #hXhaEȍeA7 8Hda !X5 +PЌ$)n0I@4X V;(ˉ(E38 "k}hF@vlP>2-I%aYA9>:ЮU$#8>K*n{)”aY Yv]8!kD~ec#,ى}` F6PeF 4%ӌ0eaׁ0{DvpxݾK9j> n ?. 2Ԏ6\$60DNxY( G7Q/7Y{DBXm WRH8|Nne_IminIx-n|%V^[X2Cչ`R+רlquie~uϭi+5/k^8׼pyz~yNnaI\ ɂ`fƾPiqi*!ӹQs&rP5IS\ T F77{rcc[ed.kπtWVIyNoeD"J8j3ܞicL`#9ɋLU +FH$ʸL1P0ܞũgIglFrxGXxNrL&Zx 3[L+==p`%Dt# .JP T3D"b!h aj(sRH2lm[#1B RH,D[q;`fWлByJJ6))NOImu`>Km9%?(/a.cJɐ)XA DڕESsaw5vp eP `d1!J% vHv[;`G"]$oTnpH5NX(T.7Pl̈́&e.}ܨ2$aWuZr&!0N[)~w`QYty[12rmZ|c*r5ֺ~O=a_’sT.eg*1_rEwW]ot1Yuͯ^a3ժWûj{=5c󊳖`XyV=ݻJ(r <_-|qY{w+DݳWI+tS>-Z!zҧS%tP`%wr"!~ 5TfhsEB tF@y9m'gFcƎ\8qJo.5c)StrRv>XSwOർӟ+$ +H${4A(<ߐ'*\  Yo';۶ˡbmNWMhK7./kf*J:S\e_Mo?T{y~khMT6.4\έSuzv7K^93eiC^8֙z#9 1DTNj1m՗/sF6<䅻hOQ F܌nƒRu:mD!wDz']Gt:ֆp )2r+/s(=wݗ;Fݩ7g~QWW٫UӯܑBUB"ڎhSd)$1/<\p_ Phs:!!B)tU]AACsa`3dC=BA"?6ى&AAQ`zv!)Ce8kad 8GvCAn`،@vQ<II<JcB7j1CYP͌xݝ#3Q6PA(R̓v0zw+n%DqEnSm%"bGIcG{]\aG[[ T 9o`TH B"|ֶ"u((7)Rsǻ׎ 圢T:R3kz0jۅv!̃GyE2\MK\fSܖxwn(Ž{! I}^ Dp p$fK1F}RץQY{) %ڛ8}Rץ~K@|%3yTnn/j-'orR_N.)^A8}wۡ8]Kںe󵵖\Wlr"O޾bq< [LLb~sD|K$(^JUFIĦYG{EP0p J `4f/2 h) Z.Ʋ@k7®CbC|0U:6we#H @DCLR]Pri1*9CiFPmT0fDD۩HIXN`rAa2h( ޵6E/w9]]/܌H0ݝ7]ʏ_7%۔DjlE_}~U*EH5~$8 hM7uy&ez !4$ɵE*JR@XiUZpAIsm*eJ! n`YGW脌Fዞo R=C$6eVSeY 7K`9S9 IY *BUQir :Cx_Vo[5GgVZQetH핻c-rF&~4o `o/*|* Ʃ/>BhkFd/n{0f|_dϯZ {y]ܓ^S=h{-݋ft"Q$:is G!D»cl:R|/2A#RDvR4X*L} >Sa:>P胟fh}<6.@ V뎥V+ A*c$ݭh4G{Gc|ݭVLwauSc:ݑxA|FP˦tG{|twWN(Nc݀m,>ԉ}FHtp(%=[ y16Ft#Wۀn7v>;v!kKlS*lZSe|Z66x68kȈ} u#{~,L8Z8 6KDuNKT[VkR:c^:KuYz,Eu-/)QZYz,Ec) KXZ[MJ6KIRQKkIY*Θ21ҧ>߰Z16KX VQ)׭'~dAK?`)l,ҏ8{:lc)5$`)*?VaKԱ1.%RZ 2KZrYXVaO]W+KAZ96KXn&"`)Kk Y귪/㚛Q<{ '4\tyyjOʎLJHYf%7)ZLE1Ӛ\,ǢJy0ʬk6g+`.w_h]y^k>Tijr^7l~8‚].4u*"')6N0ߜyβ_87 9b`\\<^qv// k嬬n]~ml9ͮ_F _}u~ -}f GB "q$~O/׹:G>dbGJx h"oT4wPDl@5Z͗U=x]Ҷ>``?竧rvBsm8؞qI7-'q4S"o}DVhrnKv2L5cQs_ǡU . fk`]NC#7qpOy K\EPPB^3 US Ѫf*YpBYBsPPJI\Eeo 21M}LaK]zC$r6loW+w{n8zofmL3ʐ,VI+49P_Vey.,+ɻIoo/jBKrz@CGބM@@oc( =\8ƫ]AR?WpIօ'v馽A7bwcLÞkXc_ o)hm1&۫ec:fڇVkۑ"W\I梂ԠҜg,iZS^sںC ⰟU]s{h6~n Q("J3T<[Ŋ<%JT aQ: FVOC?'_լuo7_෼lvJWtբһuc9{W}\~f&yI&/ۢBUiU)])3vP&s *8jY KM. 
DҔ$\mx[x(-,6|{ | kIO e {JeLȂFk6O\@]TfrgrMc&Wי\9G;GeqNK%:/q㗝IAR6ܟY×+ i|vvcS'{='~H!2fgK|GIv##k{NP$({-xoDXym300zX5z1F򿱮<{Nz Ŧݭ6R]ݞXsie€x&iM@tu9m]s 0`w}Ac2\2D4$ot,H|3)c]Ds?$^]?dqz[Z|dq\ߥ;{e]83AVcc2'u޹nS<|@%F+DzڧGjrW4bC'* , \щۓ^0%ڰ˂Ij8ݴL݀fr= '.TFXv=yS:{3,A7gKxf Lt}ٗ¾T^Sƞo@ Mnbe=7a%F]ZS{=:%n!-ڑAN)]4t'7trun ڟA}n3f1ĕWOb<;t*D6i'>~^:{Y(j?"a|ӏ^.F3լ c{!XOBxN.&:ZEP'͌iLN#wKa>Q8I<,(!0ġ'鞧HQݔZGQfB*A+4perW *EV2QUf,e*u&8((l F|OGDlvE( 5$R ) ]d 4#sQ&UEFUY \CFR>܋2aM/FMHHYȊ*R(SKʂ49E:@ L)7Ir,|F J@ZPA-ۃZzoJ}i.t *:6ZdYAyҤ<84VS,@Ks:FS-o;jyJ6?.'f("E7X4LQ.JǙJPШ|El,~)7gJjr"|~y0Ƃ XBvԁ$ [.Cʸ=+s2'h^h^  KGѼnEB}f'ų7|$t$y xhc[`UwƏiբg,8ʱ5?s'0wI{Z&?_LniE8"gF}$JJEcEN2 nXiQ j+8OB!/$ۺ*N+ҼS2+I!Q2Y.3r,*;_EZVN>_Zڑ I~"%>w&!W/nnO'g>#~>9=TOnv|fibD\ X>11 }yC?|> =Y NA,~>_0GjRZ^`mP؋XwnXB׾- PbihO uΏZ;2S= ]?iMuBA+=鼁oo?5aK vn%6 4R5O&c)}]ة R:v >>y6ߧkVYG=_ } \5Gpبx%ytLQQ ͉NcXWտ %%uz@ zmFMt2Q577szà/K`-qIcY2oĮgʄ_dc8Vfb#QsZa!R➪z#ٝHWv3:GQ!cq'j?[d[݂ _nch J0%^E~wF n*Rlnx/ba3 ~l5ڑ ѿV7vX#)1\v>5x!12hF?؈n b1N3Bp;Szn ѭ rpG0Ct:nC"lpEx,C^9E#8eGu.LS(88DpŻv4 bx9575@@N^$rMjG\۩nf^6-~pjO⒢/.)K3)jy]2%v~[KSd:33,Y<@SeiR2/+6k75gw;LO`Y&1Zk߿-{zOqQD?|g* mTaoUCmաcZCuCs:tA[ef\aFfD 1.(cYҎ%E"Й9VR\;alUnzfn, M!0? %\i%q `YIe)r*ÉҜyi(ZZeg  #:0Q!O.ngNt~4"(Jo: 4^&/nVv{6'=9cQNB$u[ qmwG_!%~ r8\8pg3=k#$exJ{8×ĵaaOuQzDL=7 $Њ釅X!8@F7'%ςӨ+$-q:B-"ϭG +(7WQ0DxGfg2o4CsL vbey|O Q`vMX(b/:>7ds?|`-o4d XZکi0 mk؝jQK l?B;'V2cjH۝j YZ2< \n jW`ВI-`кp) y*:S"GSXoq̻"ZIv;}whVdǕi'"*SFXS -QρD1xI5j.&z&**K'YZ /ZzZmGsQK hfo%(^XB sZ ,MKK2gBi1/嵋%MT,Rt>EʪJdsjiBբD,e"Cd!ws l dLw0kɅIFje+aWKGV̛pX&$ۊ?:U%iշL> ` ?l\ {%-%)ݢܻa~:G:B׼Kk{l-4 .%GPvY)lū;1ͫv%u%#2KڟVx2UD0Eסx ժc\+2tm7PZX}ㇼ}Pi73MyUO0y\U׽?a2Z~4)8*F[ǐ?~zS؛,n!Q9$VކO?_>a=sӨ2p߲r߅hZ!DSO/(E*|3& &(^$ʢr>RE+|#&:&EᏪj0'$?^^1īj_:P)a UzB63(ah2}y($8Mꮖ#Hƺ@k\;QrgsX+ZfќwZFflhJ[295ɗ5î0sKRCVjߖs[$68ڠ4ZJ{>Di΂ڹ]:7(r~-Lfu-/C<3YnrhzdKZ~))yZ@Y 50v a8982IZs[(Ya͍”Uad nsZvӱ( ښ`%;O@ 8k@1$q#FzmCpw/}?А>CegZBFZͷT+qI$xb~ҟð$h;_P怂ziٱ=wX.R(ϧ!1V,MlZv~;r9'1z_m.A$"shRI*v QWj$c SW\WƄzuy h Dm#TvbL #??0G}Yޏ[}P}!ekiM\NxVeK^C4 zUtV3 r{[IY@kWk(z.,`kd9n{WV0v8kFZZD҃ݧ+5:6J5 :zH/>x4Wޟ^XɨuT%gwy/]^i%y`2?1&g;vt6mݎf72 rg6d) HT.3Q=diY'Cq6J\]bx}u:CM^>o#Eݴ|gocX<_Bilpʆ xC@ ށ9Ef '` kOGzC&g^(ܘvyuŚĸm[_m t Xf\2pFY:6sˆ)yYxL_wbOtYq;b182F/_$.ԉdꁵ-ZHrJgeayUQ1)d"r>>(;_uq I˕(/@IYQ9X`!!18WBg+tiۈ{rsQ)u[sg%kl,$ U(>݅tXxx7*t7!Vuȋ'(Ux\'}ǐک>SXw l:v=R޾ѽXv{i(m$y[gp"$g,"V^̧l,CyjVl}'1 O/O彩=;Cz!|"{Սd ^GTnH,{aV<\rs)Θ<^$KυT BKw>sք(:: L΅iO_E^XL #`+yIuA ^s7lM7c5{Ҥ?II۶5ݚuJg.YɅ6 R  KSԨ*Sڊ|6kw~h*%j`. '33K"pgDFgHdT:W_L qkՁo {+w~G}?yh"Ow<7aU?,$Z؀KhS6ތj= :qkCB),hFJ1E%9299Z=E]N^hǸh6*n[ޢ=epx[)9*h͜18B 4@)L!%JXbvMýr;gNRu)zoIB+߸ =O^N\LoCr+YT_C_w0P} ?а_"Ll&oJU)/|)%i?;PR)#ef~pRd >0ZOÌ=Xq^+ƋRdFH0F/D@ <3wW JS&}\ =ӿۊ:*,qm#/.9LXtm2M$DOdIN i%$(r̴ AOt;c68淯y N9tpqE ްi%WeYaM[~qif7G'z]^ Rn ylDxi8{[!_>Jp] o"o1J$׵ʽN:@lfLd-'cmtc$Z-nf$ՄUÉWk*ds(.9xRe|u0؈;s'i+xd8MMf?AݸqL+QA;AYk\Ե#1]K'SB-W6pb\>;"2h{Z;OH͐F̠2AЪr2 Vy0\R=|JPgú_+HSeU7ՐlJ3@5{jVJA@R%ezڌE2ok6 P2bdu¾JN {1\L5.#+EbebL;ql=]0my{\p)8{O~wr+L/oh`dJ Pl ̫N0wl<0'xj@zUA%uM!ꅩB1ԏ+4mQ:snY_ i7Aڭ|8k7Y|բv t|(Z"i7[ڭ|)l!DpL %Hѭec,geUu4"]ʪڴ7bPq)>nyZR|JH)Ps)>ՕsO6q)'w p!3ŧ9)#P֞Ѩqudj!T`)2pmej1˂Co_% r2//%SBe>w垆q7K8﫮{U!⵸cTS l؝Pțd˶ `F)H,$Y|q[ꮤ_`d!2q$czG! 
var/home/core/zuul-output/logs/kubelet.log
Jan 21 00:06:06 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 00:06:06 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 00:06:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 00:06:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 
00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc 
restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 00:06:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 
crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 
crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 00:06:07 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 00:06:07 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 21 00:06:07 crc kubenswrapper[4873]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.877758 4873 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880540 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880578 4873 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880584 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880589 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880596 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880601 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880608 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880615 4873 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880621 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880626 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880639 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880644 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880649 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880654 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880659 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880663 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880668 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880672 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880675 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880679 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880682 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880686 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880690 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880693 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880697 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880701 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880704 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880708 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880712 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880715 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880719 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880723 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880729 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880735 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880740 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880745 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880749 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880753 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880757 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880761 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880765 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880768 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880771 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880775 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880778 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880782 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880785 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880788 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880792 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880795 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880798 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880802 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880805 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880808 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880812 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880815 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880818 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880824 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880828 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880831 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880834 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880839 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880843 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880848 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880854 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880858 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880865 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880870 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880875 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880880 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.880885 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881246 4873 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881261 4873 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881270 4873 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881275 4873 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881281 4873 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881286 4873 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881292 4873 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881297 4873 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881301 4873 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881305 4873 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881310 4873 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881314 4873 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881318 4873 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881322 4873 flags.go:64] FLAG: --cgroup-root="" Jan 21 
00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881326 4873 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881331 4873 flags.go:64] FLAG: --client-ca-file="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881337 4873 flags.go:64] FLAG: --cloud-config="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881341 4873 flags.go:64] FLAG: --cloud-provider="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881345 4873 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881350 4873 flags.go:64] FLAG: --cluster-domain="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881354 4873 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881358 4873 flags.go:64] FLAG: --config-dir="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881362 4873 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881366 4873 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881373 4873 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881377 4873 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881382 4873 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881388 4873 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881393 4873 flags.go:64] FLAG: --contention-profiling="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881397 4873 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881402 4873 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881407 4873 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881412 4873 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881421 4873 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881427 4873 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881433 4873 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881438 4873 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881444 4873 flags.go:64] FLAG: --enable-server="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881449 4873 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881456 4873 flags.go:64] FLAG: --event-burst="100" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881463 4873 flags.go:64] FLAG: --event-qps="50" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881485 4873 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881490 4873 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881494 4873 flags.go:64] FLAG: --eviction-hard="" Jan 21 00:06:07 crc kubenswrapper[4873]: 
I0121 00:06:07.881500 4873 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881504 4873 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881509 4873 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881514 4873 flags.go:64] FLAG: --eviction-soft="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881519 4873 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881523 4873 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881528 4873 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881533 4873 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881538 4873 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881543 4873 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881563 4873 flags.go:64] FLAG: --feature-gates="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881574 4873 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881579 4873 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881585 4873 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881590 4873 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881596 4873 flags.go:64] FLAG: --healthz-port="10248" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881601 4873 flags.go:64] FLAG: --help="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881607 4873 flags.go:64] FLAG: --hostname-override="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881611 4873 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881615 4873 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881620 4873 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881624 4873 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881628 4873 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881632 4873 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881636 4873 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881641 4873 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881645 4873 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881649 4873 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881653 4873 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881657 4873 flags.go:64] FLAG: --kube-reserved="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 
00:06:07.881661 4873 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881665 4873 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881669 4873 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881674 4873 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881678 4873 flags.go:64] FLAG: --lock-file="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881682 4873 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881686 4873 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881691 4873 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881697 4873 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881701 4873 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881705 4873 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881709 4873 flags.go:64] FLAG: --logging-format="text" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881713 4873 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881718 4873 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881722 4873 flags.go:64] FLAG: --manifest-url="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881726 4873 flags.go:64] FLAG: --manifest-url-header="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881732 4873 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881736 4873 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881741 4873 flags.go:64] FLAG: --max-pods="110" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881745 4873 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881750 4873 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881754 4873 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881758 4873 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881763 4873 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881767 4873 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881771 4873 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881782 4873 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881786 4873 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881790 4873 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881795 4873 flags.go:64] FLAG: 
--pod-cidr="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881799 4873 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881806 4873 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881810 4873 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881814 4873 flags.go:64] FLAG: --pods-per-core="0" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881818 4873 flags.go:64] FLAG: --port="10250" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881822 4873 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881826 4873 flags.go:64] FLAG: --provider-id="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881830 4873 flags.go:64] FLAG: --qos-reserved="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881834 4873 flags.go:64] FLAG: --read-only-port="10255" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881838 4873 flags.go:64] FLAG: --register-node="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881843 4873 flags.go:64] FLAG: --register-schedulable="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881847 4873 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881854 4873 flags.go:64] FLAG: --registry-burst="10" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881858 4873 flags.go:64] FLAG: --registry-qps="5" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881862 4873 flags.go:64] FLAG: --reserved-cpus="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881866 4873 flags.go:64] FLAG: --reserved-memory="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881871 4873 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881875 4873 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881880 4873 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881887 4873 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881891 4873 flags.go:64] FLAG: --runonce="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881895 4873 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881900 4873 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881905 4873 flags.go:64] FLAG: --seccomp-default="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881912 4873 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881916 4873 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881921 4873 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881926 4873 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881930 4873 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881935 
4873 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881939 4873 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881943 4873 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881947 4873 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881951 4873 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881955 4873 flags.go:64] FLAG: --system-cgroups="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881959 4873 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881965 4873 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881970 4873 flags.go:64] FLAG: --tls-cert-file="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881974 4873 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881978 4873 flags.go:64] FLAG: --tls-min-version="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881983 4873 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881987 4873 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881991 4873 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881995 4873 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.881999 4873 flags.go:64] FLAG: --v="2" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.882005 4873 flags.go:64] FLAG: --version="false" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.882010 4873 flags.go:64] FLAG: --vmodule="" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.882015 4873 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.882019 4873 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882114 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882119 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882127 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882131 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882135 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882138 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882142 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882147 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882152 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882155 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882159 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882163 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882166 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882170 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882173 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882178 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882182 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882186 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882189 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882194 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882197 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882201 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882204 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882208 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882215 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882218 4873 feature_gate.go:330] unrecognized feature gate: Example Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882222 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882226 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882229 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882232 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882236 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882239 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882243 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 00:06:07 crc 
kubenswrapper[4873]: W0121 00:06:07.882246 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882251 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882254 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882258 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882261 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882264 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882268 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882271 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882275 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882278 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882282 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882286 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882289 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882292 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882296 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882300 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882303 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882306 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882310 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882313 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882318 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882321 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882324 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882329 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882333 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882336 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 00:06:07 crc 
kubenswrapper[4873]: W0121 00:06:07.882340 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882345 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882348 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882352 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882356 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882360 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882365 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882372 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882377 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882384 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882390 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.882395 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.882563 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.896834 4873 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.897056 4873 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897198 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897255 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897304 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897365 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897424 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897472 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897590 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897654 4873 
feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897699 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897743 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897787 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897829 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897872 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897923 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.897969 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898012 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898054 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898096 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898137 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898186 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898239 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898285 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898332 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898376 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898419 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898462 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898520 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898685 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898741 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898786 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898831 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898880 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898933 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.898981 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899029 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899075 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899121 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899164 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899212 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899261 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899312 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899356 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899399 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899443 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899484 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899527 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899783 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899866 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899919 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.899964 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900007 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900050 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900094 4873 feature_gate.go:330] unrecognized feature gate: Example Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900142 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900186 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900236 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900283 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900327 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900371 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900420 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900466 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900509 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900567 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900622 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900667 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900711 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900761 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900805 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900854 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900903 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.900948 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.900996 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901249 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901313 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901359 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901402 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901443 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901485 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901566 
4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901617 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901667 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901712 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901754 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901796 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901838 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901880 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901928 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.901979 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902028 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902072 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902116 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902159 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902200 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902250 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902294 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902337 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902379 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902421 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902463 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902504 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902570 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902621 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902664 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902707 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902750 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902802 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902858 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902915 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.902960 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903006 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903050 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903100 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903144 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903189 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903238 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903282 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903325 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903367 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903409 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903451 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903495 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903562 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903620 4873 feature_gate.go:330] unrecognized feature gate: Example Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903664 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903706 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903747 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903788 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903829 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903880 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903933 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.903995 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904048 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904107 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904158 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904203 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904252 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904297 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904339 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904382 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904423 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC 
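The "unrecognized feature gate" warnings that dominate this stretch of the startup repeat for the same set of names (GatewayAPI, NewOLM, PinnedImages, SigstoreImageVerification, and so on) because the kubelet parses its feature-gate configuration more than once while starting; the names appear to be OpenShift-level gates that the upstream Kubernetes gate registry does not know, and the kubelet continues past them and logs its effective gate map just below. A minimal sketch for tallying the distinct gate names behind these warnings, assuming Python 3 and a plain-text copy of this capture saved as kubelet.log (the file name is illustrative, not part of the capture):

#!/usr/bin/env python3
# Count how many times each "unrecognized feature gate" name is warned about.
# kubelet.log is an assumed local copy of this capture; merged or one-record-per-line
# layouts both work, since the pattern is matched anywhere in each line.
import re
from collections import Counter

PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        counts.update(PATTERN.findall(line))

for gate, seen in sorted(counts.items()):
    print(f"{gate}: warned {seen} time(s)")
print(f"{len(counts)} distinct unrecognized gates")

Run against this capture it would show most gate names warned about once per configuration pass, which is what the repetition above reflects.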
Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904465 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904513 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.904588 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.904639 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.904948 4873 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.908005 4873 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.908200 4873 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.908832 4873 server.go:997] "Starting client certificate rotation" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.908923 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.909414 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-26 11:56:52.704739169 +0000 UTC Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.909620 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.915046 4873 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.918046 4873 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.918503 4873 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.927751 4873 log.go:25] "Validated CRI v1 runtime API" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.945103 4873 log.go:25] "Validated CRI v1 image API" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.947245 4873 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.950628 4873 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-00-00-41-00:/dev/sr0 
7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.950697 4873 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.983882 4873 manager.go:217] Machine: {Timestamp:2026-01-21 00:06:07.981165913 +0000 UTC m=+0.221033629 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5b5b45e7-e6db-419e-86e2-5c78d53566ef BootID:5ea20d25-b537-4725-8df1-c1c72b69bcdb Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:92:d6:ff Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:92:d6:ff Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:14:92:a0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d0:0f:cc Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7d:cf:6b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:59:85:90 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:10:d0:ba:a8:89 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:c3:36:3c:94:91 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: 
DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.984302 4873 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
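The manager.go:217 record above packs the whole cAdvisor machine inventory into a single line, with capacities given in raw bytes. A quick conversion makes the headline figures readable: 33654128640 bytes of memory is roughly 31.3 GiB, the 214748364800-byte vda disk is exactly 200 GiB, and the /var filesystem on vda4 is about 79.4 GiB. A small sketch of that arithmetic, where the dictionary simply copies numbers from the record above:

# Convert the byte counts reported in the cAdvisor Machine record into GiB.
# The values are copied from the manager.go:217 line above; nothing is measured here.
GIB = 1024 ** 3

machine_bytes = {
    "MemoryCapacity": 33_654_128_640,
    "vda disk size": 214_748_364_800,
    "/var (vda4) capacity": 85_292_941_312,
}

for name, value in machine_bytes.items():
    print(f"{name}: {value / GIB:.2f} GiB")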
Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.984519 4873 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.985367 4873 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.985686 4873 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.985766 4873 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.986159 4873 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.986181 4873 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.986507 4873 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.986611 4873 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.986985 4873 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.987125 4873 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.988151 4873 kubelet.go:418] "Attempting to sync node with API server" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.988193 4873 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
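The container_manager_linux.go:272 record above embeds the kubelet's effective node configuration as one JSON object: the systemd cgroup driver, the 4096 pod PID limit, the system-reserved CPU, memory, and ephemeral-storage, and the hard eviction thresholds. A minimal sketch for pulling that object out and pretty-printing it, assuming the log is available locally as kubelet.log with one journald record per line (both the file name and that layout are assumptions, not part of this capture):

#!/usr/bin/env python3
# Pretty-print the nodeConfig JSON embedded in the "Creating Container Manager
# object based on Node Config" record. kubelet.log is an assumed local copy with
# one record per line, so the greedy match ends at that record's closing brace.
import json
import re

NODECONFIG = re.compile(r"nodeConfig=(\{.*\})")

with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        match = NODECONFIG.search(line)
        if match:
            print(json.dumps(json.loads(match.group(1)), indent=2))
            break

On this node the output would show CgroupDriver set to systemd, PodPidsLimit 4096, and the memory.available, nodefs, and imagefs eviction signals laid out one field per line.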
Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.988242 4873 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.988268 4873 kubelet.go:324] "Adding apiserver pod source" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.988288 4873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.990646 4873 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.991061 4873 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.991016 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:07 crc kubenswrapper[4873]: W0121 00:06:07.991072 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.991209 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.991132 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992131 4873 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992650 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992672 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992680 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992687 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992700 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992707 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992713 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992724 4873 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992731 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992738 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992754 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992760 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.992969 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.993394 4873 server.go:1280] "Started kubelet" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.993970 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.993940 4873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.993952 4873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.994749 4873 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 00:06:07 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.997228 4873 server.go:460] "Adding debug handlers to kubelet server" Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.996820 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c9642f642d0a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:06:07.993344162 +0000 UTC m=+0.233211808,LastTimestamp:2026-01-21 00:06:07.993344162 +0000 UTC m=+0.233211808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.998698 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.998745 4873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.998948 4873 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.998996 4873 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.999016 4873 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.999024 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:46:12.721653193 +0000 UTC Jan 21 00:06:07 crc kubenswrapper[4873]: I0121 00:06:07.999096 4873 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 00:06:07 crc kubenswrapper[4873]: E0121 00:06:07.999359 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.000102 4873 factory.go:55] Registering systemd factory Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.000138 4873 factory.go:221] Registration of the systemd container factory successfully Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.003856 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.004025 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.004158 4873 factory.go:153] Registering CRI-O factory Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.005060 4873 factory.go:221] Registration of the crio container factory successfully Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.005230 4873 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.005303 4873 factory.go:103] Registering Raw factory Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.005329 4873 manager.go:1196] Started watching for new ooms in manager Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.005968 4873 manager.go:319] Starting recovery of all containers Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020622 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020713 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020742 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020770 4873 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020795 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020819 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020847 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020872 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020900 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020929 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020956 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.020981 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021006 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021033 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021056 4873 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021088 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021113 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021737 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021877 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021910 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021937 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021961 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.021990 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022018 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022046 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022076 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022539 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022616 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022647 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022671 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022697 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022725 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022750 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022776 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022802 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022828 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022853 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022877 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022901 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.022927 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024080 4873 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024145 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024180 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024208 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024235 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024264 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024288 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 
00:06:08.024863 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024896 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024926 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024952 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024976 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.024999 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025038 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025063 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025092 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025122 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025154 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 
00:06:08.025198 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025224 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025247 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025271 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025290 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025310 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025331 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025350 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025370 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025391 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025412 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025439 4873 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025463 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025488 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025513 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025539 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025610 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025641 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025669 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025697 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025642 4873 manager.go:324] Recovery completed Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025861 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025950 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" 
seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.025994 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026025 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026054 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026080 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026112 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026144 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026170 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026203 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026232 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026260 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026288 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 00:06:08 crc 
kubenswrapper[4873]: I0121 00:06:08.026315 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026343 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026369 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026402 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026429 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026456 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026482 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026510 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026543 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026609 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026635 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026662 4873 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026693 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026723 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026762 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026790 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026814 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026837 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026869 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026903 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026931 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026958 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.026985 4873 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027015 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027041 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027067 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027097 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027124 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027151 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027177 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027201 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027228 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027253 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027278 4873 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027305 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027329 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027353 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027378 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027403 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027423 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027445 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027466 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027485 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027504 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027524 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027542 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027625 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027648 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027667 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027687 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027705 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027725 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027745 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027763 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027782 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027800 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027818 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027841 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027867 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027891 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027916 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027936 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027955 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.027981 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028005 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028028 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028050 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028069 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028086 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028103 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028121 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028138 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028161 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028184 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028211 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028235 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028258 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028278 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028295 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028313 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028332 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028352 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028372 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028391 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028410 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028429 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028446 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028464 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028482 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028499 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028518 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028536 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028595 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028621 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028644 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028671 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028696 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028720 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028743 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028769 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028793 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028817 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028843 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028861 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028879 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028899 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028920 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028945 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.028973 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029000 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029028 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029054 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029079 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029104 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029131 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029200 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029218 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029236 4873 reconstruct.go:97] "Volume reconstruction finished" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.029250 4873 reconciler.go:26] "Reconciler: start to sync state" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.035082 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.037510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.037610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.037634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.039496 4873 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.039518 4873 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.039540 4873 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.055286 4873 policy_none.go:49] "None policy: Start" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.057262 4873 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.057660 4873 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.057688 4873 state_mem.go:35] "Initializing new in-memory state store" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.062138 4873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.062207 4873 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.062234 4873 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.062314 4873 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.065034 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.065204 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.099078 4873 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.132884 4873 manager.go:334] "Starting Device Plugin manager" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.134308 4873 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.134353 4873 server.go:79] "Starting device plugin registration server" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.134855 4873 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.134873 4873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.135066 4873 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.135150 4873 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.135165 4873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.142215 4873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.163118 4873 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 
00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.163202 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164511 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164751 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.164792 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165777 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165883 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165954 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.165995 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.166977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167054 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167227 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167302 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.167887 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168025 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168229 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168282 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.168451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169105 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169118 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169303 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169335 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.169442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.170309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.170331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.170343 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.200384 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231329 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231389 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231428 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231463 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231496 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231526 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231582 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231612 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231642 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231676 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231793 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231861 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.231893 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.232051 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.236097 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.238127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.238191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.238211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.238293 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.238894 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.333801 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.333886 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.333918 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.333952 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.333985 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334015 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334047 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334079 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334102 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334144 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334109 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334169 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 
00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334209 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334110 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334109 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334169 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334318 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334525 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334197 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334537 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334357 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334623 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334668 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334745 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334795 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334796 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334827 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.334884 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.439237 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.440687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.440734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:08 crc 
kubenswrapper[4873]: I0121 00:06:08.440751 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.440783 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.441265 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.508333 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.515536 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.542281 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.544649 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9fa7e3978ef7b9f86192ac4f3b6c8575bbf9d5a0e184b1521f1040249fc5eecd WatchSource:0}: Error finding container 9fa7e3978ef7b9f86192ac4f3b6c8575bbf9d5a0e184b1521f1040249fc5eecd: Status 404 returned error can't find the container with id 9fa7e3978ef7b9f86192ac4f3b6c8575bbf9d5a0e184b1521f1040249fc5eecd Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.549182 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c1e73c6c47f6d122cf683934a5485b852aa349582a9cfd599a8d839f24e8f74f WatchSource:0}: Error finding container c1e73c6c47f6d122cf683934a5485b852aa349582a9cfd599a8d839f24e8f74f: Status 404 returned error can't find the container with id c1e73c6c47f6d122cf683934a5485b852aa349582a9cfd599a8d839f24e8f74f Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.563569 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-130018e99a9d15e32347f836efb9782ec3d7ea5b8e3f4d14b94e3bfec9482710 WatchSource:0}: Error finding container 130018e99a9d15e32347f836efb9782ec3d7ea5b8e3f4d14b94e3bfec9482710: Status 404 returned error can't find the container with id 130018e99a9d15e32347f836efb9782ec3d7ea5b8e3f4d14b94e3bfec9482710 Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.566389 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.579658 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.590754 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-63d1c9cad6907411954c1826d48eb50684cd880b1b00f3da8693665f5ac67e95 WatchSource:0}: Error finding container 63d1c9cad6907411954c1826d48eb50684cd880b1b00f3da8693665f5ac67e95: Status 404 returned error can't find the container with id 63d1c9cad6907411954c1826d48eb50684cd880b1b00f3da8693665f5ac67e95 Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.600282 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7fe272ddf94751af72dd5af7c3ea278a8dd73f427199f7efa68bd78ed8a6d62f WatchSource:0}: Error finding container 7fe272ddf94751af72dd5af7c3ea278a8dd73f427199f7efa68bd78ed8a6d62f: Status 404 returned error can't find the container with id 7fe272ddf94751af72dd5af7c3ea278a8dd73f427199f7efa68bd78ed8a6d62f Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.601412 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.841454 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.842750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.842814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.842831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.842866 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.843431 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.925321 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.925695 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:08 crc kubenswrapper[4873]: W0121 00:06:08.938507 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:08 crc kubenswrapper[4873]: E0121 00:06:08.938606 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.995404 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:08 crc kubenswrapper[4873]: I0121 00:06:08.999614 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:46:41.128344792 +0000 UTC Jan 21 00:06:09 crc kubenswrapper[4873]: W0121 00:06:09.048714 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:09 crc kubenswrapper[4873]: E0121 00:06:09.048796 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.069592 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.069682 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1e73c6c47f6d122cf683934a5485b852aa349582a9cfd599a8d839f24e8f74f"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.071037 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309" exitCode=0 Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.071080 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.071105 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7fe272ddf94751af72dd5af7c3ea278a8dd73f427199f7efa68bd78ed8a6d62f"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.071191 4873 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.072191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.072216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.072224 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.072952 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389" exitCode=0 Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073010 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073028 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"63d1c9cad6907411954c1826d48eb50684cd880b1b00f3da8693665f5ac67e95"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073093 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073813 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073830 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.073777 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074845 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b" exitCode=0 Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074888 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074903 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"130018e99a9d15e32347f836efb9782ec3d7ea5b8e3f4d14b94e3bfec9482710"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.074952 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.075643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.075667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.075678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.076479 4873 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b" exitCode=0 Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.076502 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.076515 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9fa7e3978ef7b9f86192ac4f3b6c8575bbf9d5a0e184b1521f1040249fc5eecd"} Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.076605 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.077272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.077307 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.077324 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: W0121 00:06:09.093967 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Jan 21 00:06:09 crc kubenswrapper[4873]: E0121 00:06:09.094056 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Jan 21 00:06:09 crc kubenswrapper[4873]: E0121 00:06:09.402261 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.643747 4873 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.645679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.645737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.645748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.645770 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:09 crc kubenswrapper[4873]: E0121 00:06:09.647050 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Jan 21 00:06:09 crc kubenswrapper[4873]: I0121 00:06:09.964321 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.000266 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:14:03.914377388 +0000 UTC Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083409 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083459 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083475 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083488 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083502 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.083598 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.084513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.084542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.084573 4873 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.085603 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd" exitCode=0 Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.085646 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.085887 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.087339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.087373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.087385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.089136 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.089209 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.089776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.089802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.089811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.095387 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.095427 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.095440 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.095588 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.096864 4873 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.096903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.096919 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.099907 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.099945 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.099963 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3"} Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.099998 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.100824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.100844 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:10 crc kubenswrapper[4873]: I0121 00:06:10.100852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.000370 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:30:16.877150036 +0000 UTC Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108031 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4" exitCode=0 Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108183 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4"} Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108240 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108260 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108355 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108353 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108200 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.108527 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110427 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.110653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.120884 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.247667 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.249515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.249634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.249661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.249716 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.978897 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:11 crc kubenswrapper[4873]: I0121 00:06:11.984801 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.001132 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:01:25.955899605 +0000 UTC Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.115747 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3"} Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.115792 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.115820 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68"} Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.115846 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53"} Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.115899 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117100 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.117119 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.426935 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.860655 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.860918 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.862748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:12 crc 
kubenswrapper[4873]: I0121 00:06:12.862875 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:12 crc kubenswrapper[4873]: I0121 00:06:12.862904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.001545 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:43:09.649702417 +0000 UTC Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.124991 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7"} Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.125048 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.125069 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2"} Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.125036 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126315 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.126541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.348344 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.348731 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.350432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.350488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.350504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:13 crc kubenswrapper[4873]: I0121 00:06:13.915630 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.002493 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:47:47.70670741 +0000 UTC Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.121620 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.121738 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.127778 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.127822 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.127844 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.128914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.128953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129160 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129172 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:14 crc kubenswrapper[4873]: I0121 00:06:14.129235 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:15 crc kubenswrapper[4873]: I0121 00:06:15.003175 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:02:31.895658334 +0000 UTC Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.004037 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:33:18.515823594 +0000 UTC Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.374058 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.374370 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.376201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.376243 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.376254 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:16 crc kubenswrapper[4873]: I0121 00:06:16.944711 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.004760 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:34:22.825142583 +0000 UTC Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.138043 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.139589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.139715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.139741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.653884 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.654128 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.656013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.656101 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:17 crc kubenswrapper[4873]: I0121 00:06:17.656127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:18 crc kubenswrapper[4873]: I0121 00:06:18.005686 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:49:16.791925397 +0000 UTC Jan 21 00:06:18 crc kubenswrapper[4873]: E0121 00:06:18.142405 4873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:06:19 crc kubenswrapper[4873]: I0121 00:06:19.006324 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-18 02:09:58.193442946 +0000 UTC Jan 21 00:06:19 crc kubenswrapper[4873]: E0121 00:06:19.966364 4873 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 00:06:19 crc kubenswrapper[4873]: I0121 00:06:19.995787 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 00:06:20 crc kubenswrapper[4873]: I0121 00:06:20.007397 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:03:31.610343943 +0000 UTC Jan 21 00:06:20 crc kubenswrapper[4873]: W0121 00:06:20.761076 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 00:06:20 crc kubenswrapper[4873]: I0121 00:06:20.761193 4873 trace.go:236] Trace[1969104740]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:06:10.759) (total time: 10001ms): Jan 21 00:06:20 crc kubenswrapper[4873]: Trace[1969104740]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:20.761) Jan 21 00:06:20 crc kubenswrapper[4873]: Trace[1969104740]: [10.001515001s] [10.001515001s] END Jan 21 00:06:20 crc kubenswrapper[4873]: E0121 00:06:20.761228 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 00:06:21 crc kubenswrapper[4873]: E0121 00:06:21.002818 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.008161 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:30:08.321322606 +0000 UTC Jan 21 00:06:21 crc kubenswrapper[4873]: W0121 00:06:21.073526 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.073665 4873 trace.go:236] Trace[1371057883]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:06:11.072) (total time: 10001ms): Jan 21 00:06:21 crc kubenswrapper[4873]: Trace[1371057883]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS 
handshake timeout 10001ms (00:06:21.073) Jan 21 00:06:21 crc kubenswrapper[4873]: Trace[1371057883]: [10.00148559s] [10.00148559s] END Jan 21 00:06:21 crc kubenswrapper[4873]: E0121 00:06:21.073692 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 00:06:21 crc kubenswrapper[4873]: W0121 00:06:21.212371 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.212488 4873 trace.go:236] Trace[1750191803]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:06:11.210) (total time: 10001ms): Jan 21 00:06:21 crc kubenswrapper[4873]: Trace[1750191803]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:21.212) Jan 21 00:06:21 crc kubenswrapper[4873]: Trace[1750191803]: [10.001855038s] [10.001855038s] END Jan 21 00:06:21 crc kubenswrapper[4873]: E0121 00:06:21.212515 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 00:06:21 crc kubenswrapper[4873]: E0121 00:06:21.251200 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.308578 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.308668 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.316147 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:06:21 crc kubenswrapper[4873]: I0121 00:06:21.316251 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 
21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.008583 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:28:16.716536917 +0000 UTC Jan 21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.431262 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.431439 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.432662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.432699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:22 crc kubenswrapper[4873]: I0121 00:06:22.432710 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.009956 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:28:16.566109862 +0000 UTC Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.361433 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.361746 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.363518 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.363570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.363608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:23 crc kubenswrapper[4873]: I0121 00:06:23.369080 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.010341 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:25:21.475981628 +0000 UTC Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.121863 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.122017 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 
00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.160689 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.161849 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.161891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.161905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.219263 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.236948 4873 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.452252 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.454316 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.454379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.454417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:24 crc kubenswrapper[4873]: I0121 00:06:24.454469 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:24 crc kubenswrapper[4873]: E0121 00:06:24.460935 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 21 00:06:25 crc kubenswrapper[4873]: I0121 00:06:25.010871 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:46:56.226777093 +0000 UTC Jan 21 00:06:25 crc kubenswrapper[4873]: I0121 00:06:25.833653 4873 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 00:06:25 crc kubenswrapper[4873]: I0121 00:06:25.999492 4873 apiserver.go:52] "Watching apiserver" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.004907 4873 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.005387 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.005938 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.006078 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.006128 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.006183 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.006241 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.006787 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.007016 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.007183 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.007317 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.008603 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.009499 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.009685 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.010234 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.010294 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.010390 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.010806 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.011030 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:35:48.907478254 +0000 UTC Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.011691 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.013946 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.045203 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.062473 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.081772 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.099142 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.099915 4873 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.114838 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.132516 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.144273 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.153448 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.165864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.301121 4873 trace.go:236] Trace[2064672507]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:06:11.882) (total time: 14418ms): Jan 21 00:06:26 crc kubenswrapper[4873]: Trace[2064672507]: ---"Objects listed" error: 14418ms (00:06:26.300) Jan 21 00:06:26 crc kubenswrapper[4873]: Trace[2064672507]: [14.418487932s] [14.418487932s] END Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.301180 4873 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.303248 4873 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.365696 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43056->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.365765 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43056->192.168.126.11:17697: read: connection reset by peer" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.365774 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43062->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.365874 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43062->192.168.126.11:17697: read: connection reset by peer" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.366167 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.366233 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404219 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404237 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404251 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404265 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404280 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404295 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404309 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404323 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404353 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404368 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404381 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404396 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404414 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404428 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404441 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404457 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404495 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404512 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404526 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404540 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404556 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404582 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404597 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404613 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404628 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404642 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 00:06:26 crc 
kubenswrapper[4873]: I0121 00:06:26.404655 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404668 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404682 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404696 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404714 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404728 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404747 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404767 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404780 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404794 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " 
Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404809 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404827 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404846 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404861 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404884 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404899 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404912 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404927 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404941 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404956 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" 
(UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404969 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404984 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.404999 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405013 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405028 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405041 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405055 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405069 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405085 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405100 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405114 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405133 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405147 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405161 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405176 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405190 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405205 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405219 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405234 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405249 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405265 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405279 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405293 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405310 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405324 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405354 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405370 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405384 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405399 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405414 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405429 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405444 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405460 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405475 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405490 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405505 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405521 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405537 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405554 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405585 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405607 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405622 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405640 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405655 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405669 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405683 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405698 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405712 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405727 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405742 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405770 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405786 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405800 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405815 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405830 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405846 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405862 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405878 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405892 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405908 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405925 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405942 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405958 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405972 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.405987 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406002 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406017 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406032 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406062 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406078 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406093 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406109 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406125 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406142 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406156 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406171 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406186 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406202 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406218 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406234 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406251 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406348 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406366 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406382 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406397 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406412 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406444 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406462 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406477 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406493 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406509 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406526 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406541 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406560 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406589 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406606 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406623 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406641 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406658 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406675 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406692 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406708 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406724 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406748 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406765 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406781 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406799 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406815 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406830 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406847 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406862 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406878 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406893 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406909 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406925 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406942 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406958 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406975 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.406991 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407008 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407024 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407039 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407055 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407072 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407088 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407104 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407121 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407138 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407154 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407171 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407187 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407205 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407222 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407237 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407253 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407272 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " 
Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407290 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407307 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407323 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407340 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407360 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407378 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407395 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407412 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407430 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407461 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407483 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407502 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407557 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407589 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407608 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407628 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407663 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 
00:06:26.407680 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407670 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407695 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407703 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407773 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407806 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407836 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407836 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407840 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407885 4873 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407927 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.407949 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408004 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408040 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408143 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408244 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408364 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408478 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408530 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408620 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408630 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408617 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408855 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408864 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408915 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408950 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.408951 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409050 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409162 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409172 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409201 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409213 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409345 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.409633 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.418071 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.418318 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.418628 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.419079 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.419361 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.419426 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:26.919406595 +0000 UTC m=+19.159274241 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.419492 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.419520 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:26.919512568 +0000 UTC m=+19.159380214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.419799 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.419949 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.419846 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420167 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420213 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420405 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420613 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420680 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.420819 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421183 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421356 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421414 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421656 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421749 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421685 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.421964 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422055 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422077 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422247 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422498 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422639 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422648 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422728 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422945 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422967 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422978 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.422988 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.423258 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.423610 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.423750 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.423805 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.423950 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.424175 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.424211 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.424301 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.424375 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.424756 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.425175 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.425242 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.425530 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.426062 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.426125 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.426509 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.426916 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.427092 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.427555 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.427948 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.428507 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.428678 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429061 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429154 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429150 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429185 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429325 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.429796 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430077 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430129 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430222 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430318 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430339 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430352 4873 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430476 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430500 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430526 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430782 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430920 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.430995 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431121 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431151 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431331 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431357 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431224 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431617 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431675 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.431936 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.432236 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.432310 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.432440 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.432677 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.432738 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.433304 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.433333 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.433510 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.433897 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.434275 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.434425 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.434526 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.434762 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.435281 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.435335 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.434489 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.435493 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436024 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436154 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436048 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436194 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436512 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.436899 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437152 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437350 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437487 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437629 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.437741 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.438311 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.438882 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.438969 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.439575 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.439807 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.439947 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.440925 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.441321 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.441377 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.441517 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.442851 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.443092 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.443132 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.443990 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.444026 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 00:06:26.944000525 +0000 UTC m=+19.183868271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.444320 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.444674 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.444794 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.445086 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.445775 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.445995 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.446070 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.447047 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.447571 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.447660 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.447867 4873 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.447971 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.448038 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.448093 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.448729 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.448941 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449069 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449145 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449272 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449434 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449845 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449668 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.449736 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.450408 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.450531 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.450569 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.450592 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.450593 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.450642 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:26.950627211 +0000 UTC m=+19.190494857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451105 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451514 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451643 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451863 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451874 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.451992 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.452346 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.452428 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.452492 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.452541 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.452675 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.452696 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.452709 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.452973 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.453105 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:26.9530865 +0000 UTC m=+19.192954146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.453521 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.454146 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.454268 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.454552 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.454718 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.455623 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.456951 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.457021 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.457111 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.458671 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.462185 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.462571 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.462654 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.463216 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.475818 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.494420 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.497406 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509265 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509314 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509356 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509366 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509375 4873 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509383 4873 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509391 4873 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509400 4873 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509408 4873 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509416 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509424 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509432 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 
00:06:26.509440 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509449 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509458 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509466 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509474 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509482 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509490 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509497 4873 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509505 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509514 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509524 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509532 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509540 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509552 4873 reconciler_common.go:293] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509575 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509584 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509591 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509599 4873 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509607 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509615 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509622 4873 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509630 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509638 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509646 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509655 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509680 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509688 4873 reconciler_common.go:293] "Volume detached for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509696 4873 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509704 4873 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509711 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509718 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509723 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509841 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509863 4873 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509872 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509880 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509887 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509895 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509903 4873 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath 
\"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509910 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509918 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509926 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509933 4873 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509941 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509948 4873 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509956 4873 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509964 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509971 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509979 4873 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509987 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.509995 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510003 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 
00:06:26.510011 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510019 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510026 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510034 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510042 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510051 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510059 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510067 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510075 4873 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510083 4873 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510090 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510098 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510106 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510114 4873 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510121 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510129 4873 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510137 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510145 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510153 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510160 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510168 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510176 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510183 4873 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510191 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510199 4873 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510207 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc 
kubenswrapper[4873]: I0121 00:06:26.510216 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510224 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510232 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510240 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510248 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510257 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510265 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510272 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510280 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510288 4873 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510296 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510304 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510312 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc 
kubenswrapper[4873]: I0121 00:06:26.510320 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510328 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510336 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510344 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510352 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510360 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510368 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510375 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510383 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510392 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510399 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510407 4873 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510415 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc 
kubenswrapper[4873]: I0121 00:06:26.510422 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510431 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510438 4873 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510446 4873 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510453 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510462 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510469 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510477 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510484 4873 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510492 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510501 4873 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510509 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510517 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510526 4873 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510534 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510543 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510555 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510563 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510588 4873 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510596 4873 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510604 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510613 4873 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510620 4873 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510628 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510636 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510644 4873 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510636 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510652 4873 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510675 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510685 4873 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510694 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510702 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510710 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510718 4873 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510727 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510735 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510743 4873 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510751 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510759 4873 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510766 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510774 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510781 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510789 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510797 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510805 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510812 4873 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510819 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510826 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510835 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510843 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510850 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510857 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510870 4873 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510878 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510886 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510894 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510902 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510910 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510920 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510930 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510940 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510948 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510956 4873 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510964 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510972 4873 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510979 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510987 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.510996 4873 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511003 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511011 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511019 4873 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511027 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511041 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511057 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511065 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511073 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511081 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 
21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511089 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.511097 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.512321 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.611957 4873 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.611996 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.620785 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.627951 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.633245 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.641928 4873 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 00:06:26 crc kubenswrapper[4873]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 21 00:06:26 crc kubenswrapper[4873]: set -o allexport Jan 21 00:06:26 crc kubenswrapper[4873]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 00:06:26 crc kubenswrapper[4873]: source /etc/kubernetes/apiserver-url.env Jan 21 00:06:26 crc kubenswrapper[4873]: else Jan 21 00:06:26 crc kubenswrapper[4873]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 00:06:26 crc kubenswrapper[4873]: exit 1 Jan 21 00:06:26 crc kubenswrapper[4873]: fi Jan 21 00:06:26 crc kubenswrapper[4873]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 00:06:26 crc kubenswrapper[4873]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value
:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:06:26 crc kubenswrapper[4873]: > logger="UnhandledError" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.643125 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.649298 4873 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 00:06:26 crc kubenswrapper[4873]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 00:06:26 crc kubenswrapper[4873]: if [[ -f "/env/_master" ]]; then Jan 21 00:06:26 crc kubenswrapper[4873]: set -o allexport Jan 21 00:06:26 crc kubenswrapper[4873]: source "/env/_master" Jan 21 00:06:26 crc kubenswrapper[4873]: set +o allexport Jan 21 00:06:26 crc kubenswrapper[4873]: fi Jan 21 00:06:26 crc kubenswrapper[4873]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 21 00:06:26 crc kubenswrapper[4873]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 00:06:26 crc kubenswrapper[4873]: ho_enable="--enable-hybrid-overlay" Jan 21 00:06:26 crc kubenswrapper[4873]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 00:06:26 crc kubenswrapper[4873]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 00:06:26 crc kubenswrapper[4873]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 00:06:26 crc kubenswrapper[4873]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 00:06:26 crc kubenswrapper[4873]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 00:06:26 crc kubenswrapper[4873]: --webhook-host=127.0.0.1 \ Jan 21 00:06:26 crc kubenswrapper[4873]: --webhook-port=9743 \ Jan 21 00:06:26 crc kubenswrapper[4873]: ${ho_enable} \ Jan 21 00:06:26 crc kubenswrapper[4873]: --enable-interconnect \ Jan 21 00:06:26 crc kubenswrapper[4873]: --disable-approver \ Jan 21 00:06:26 crc kubenswrapper[4873]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 00:06:26 crc kubenswrapper[4873]: --wait-for-kubernetes-api=200s \ Jan 21 00:06:26 crc kubenswrapper[4873]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 00:06:26 crc kubenswrapper[4873]: --loglevel="${LOGLEVEL}" Jan 21 00:06:26 crc kubenswrapper[4873]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:06:26 crc kubenswrapper[4873]: > logger="UnhandledError" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.652482 4873 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 00:06:26 crc kubenswrapper[4873]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 00:06:26 crc kubenswrapper[4873]: if [[ -f "/env/_master" ]]; then Jan 21 00:06:26 crc kubenswrapper[4873]: set -o allexport Jan 21 00:06:26 crc kubenswrapper[4873]: source "/env/_master" Jan 21 00:06:26 crc kubenswrapper[4873]: set +o allexport Jan 21 00:06:26 crc kubenswrapper[4873]: fi Jan 21 00:06:26 crc kubenswrapper[4873]: Jan 21 00:06:26 crc kubenswrapper[4873]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 00:06:26 crc kubenswrapper[4873]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 00:06:26 crc kubenswrapper[4873]: --disable-webhook \ Jan 21 00:06:26 crc kubenswrapper[4873]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 00:06:26 crc kubenswrapper[4873]: --loglevel="${LOGLEVEL}" Jan 21 00:06:26 crc kubenswrapper[4873]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:06:26 crc kubenswrapper[4873]: > logger="UnhandledError" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.652536 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.653644 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 21 00:06:26 crc kubenswrapper[4873]: E0121 00:06:26.653707 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.947331 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 00:06:26 crc kubenswrapper[4873]: I0121 00:06:26.996439 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.008442 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.009995 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.010521 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.011226 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:23:15.151147524 +0000 UTC Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.015740 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.015799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.015820 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015880 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:06:28.015863655 +0000 UTC m=+20.255731301 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015934 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015951 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.015934 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015968 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015971 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.015986 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.015994 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016004 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:28.015992258 +0000 UTC m=+20.255859904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016006 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016031 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016048 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:28.016033039 +0000 UTC m=+20.255900685 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016065 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:28.016055859 +0000 UTC m=+20.255923505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016077 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: E0121 00:06:27.016109 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:28.016103031 +0000 UTC m=+20.255970667 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.017898 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.027064 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.036549 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.047108 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.057398 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.076440 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.088996 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.100688 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.110815 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.119866 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.133411 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.141531 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.169605 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"812038b7f7cb16d8846477734962a8f6c9c6e50172706ecdf3f9914e4ce045ad"} Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.170473 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47977cfa045f9fa01ff5b589bc47d6639905ae192d5b1fd06ca9a54cb803be97"} Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.171512 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9de6aeb679716384a8b08b2b0bfd1d639609d1dd6b94c6a9bcb224982afe87c8"} Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.175863 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.178872 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029" exitCode=255 Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.178934 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029"} Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.186952 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.187910 4873 scope.go:117] "RemoveContainer" containerID="e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.191243 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.203487 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.214182 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.223476 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.234908 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.248529 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.258771 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.272864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.290073 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.301112 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.315519 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.328329 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.353023 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.372901 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.418539 4873 csr.go:261] certificate signing request csr-jz9ks is approved, waiting to be issued Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.423981 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.440806 4873 csr.go:257] certificate signing request csr-jz9ks is issued Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.527884 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zc72l"] Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.528298 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.531808 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.539853 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.540012 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.540748 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.557021 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.566630 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.580234 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.590642 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.595914 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nhxdr"] Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.596290 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.598177 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.598261 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.600361 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.604475 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.621983 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqh7\" (UniqueName: \"kubernetes.io/projected/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-kube-api-access-dfqh7\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.622021 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b56634f4-6332-4457-bddd-34a13ac39f0b-hosts-file\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.622052 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-host\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " 
pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.622067 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-serviceca\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.622081 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjds\" (UniqueName: \"kubernetes.io/projected/b56634f4-6332-4457-bddd-34a13ac39f0b-kube-api-access-pfjds\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.626633 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"202
6-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.638811 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.656720 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.670472 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.687113 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.701280 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.716755 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722622 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-host\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722657 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-serviceca\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722675 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjds\" (UniqueName: \"kubernetes.io/projected/b56634f4-6332-4457-bddd-34a13ac39f0b-kube-api-access-pfjds\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722710 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfqh7\" (UniqueName: 
\"kubernetes.io/projected/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-kube-api-access-dfqh7\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722725 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b56634f4-6332-4457-bddd-34a13ac39f0b-hosts-file\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722786 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b56634f4-6332-4457-bddd-34a13ac39f0b-hosts-file\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.722817 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-host\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.724448 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-serviceca\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.732612 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.752106 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfqh7\" (UniqueName: \"kubernetes.io/projected/7cbfce16-3ec7-4a22-a4ab-8d354fd56332-kube-api-access-dfqh7\") pod \"node-ca-zc72l\" (UID: \"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\") " pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.762896 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.781968 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjds\" (UniqueName: \"kubernetes.io/projected/b56634f4-6332-4457-bddd-34a13ac39f0b-kube-api-access-pfjds\") pod \"node-resolver-nhxdr\" (UID: \"b56634f4-6332-4457-bddd-34a13ac39f0b\") " pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.795513 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.809505 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.823839 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.840712 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zc72l" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.847187 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.865673 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.908350 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nhxdr" Jan 21 00:06:27 crc kubenswrapper[4873]: I0121 00:06:27.909686 4873 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910056 4873 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910091 4873 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910116 4873 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910211 4873 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910235 4873 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910224 4873 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc 
kubenswrapper[4873]: W0121 00:06:27.910104 4873 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.910223 4873 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 21 00:06:27 crc kubenswrapper[4873]: W0121 00:06:27.936229 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb56634f4_6332_4457_bddd_34a13ac39f0b.slice/crio-65652523f0eb21977f37d8a499b5782f7d045aad68576de1b427d3fc73d8bbc0 WatchSource:0}: Error finding container 65652523f0eb21977f37d8a499b5782f7d045aad68576de1b427d3fc73d8bbc0: Status 404 returned error can't find the container with id 65652523f0eb21977f37d8a499b5782f7d045aad68576de1b427d3fc73d8bbc0 Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.011369 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:25:38.613709203 +0000 UTC Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.025497 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.025600 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.025685 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:06:30.025652805 +0000 UTC m=+22.265520451 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.025701 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.025766 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:30.025759157 +0000 UTC m=+22.265626803 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.025854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.025901 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.025933 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026013 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026041 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026059 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026070 4873 projected.go:194] Error preparing data 
for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026072 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:30.026057064 +0000 UTC m=+22.265924710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026130 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:30.026115685 +0000 UTC m=+22.265983331 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026209 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026270 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026288 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.026371 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:30.026342441 +0000 UTC m=+22.266210087 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.063206 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.063342 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.063621 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.063673 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.063790 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:28 crc kubenswrapper[4873]: E0121 00:06:28.063853 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.066893 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.067471 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.068709 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.069284 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.070333 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.070939 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.071512 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.072586 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.073172 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.077005 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.077477 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.078660 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.079123 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.079667 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.081693 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.082241 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.083777 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.084146 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.084731 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.085836 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.086081 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.086794 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.087907 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.089107 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 
00:06:28.092379 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.093012 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.094482 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.095256 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.096315 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.096931 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.099868 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.100342 4873 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.100441 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.105367 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.108929 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.109738 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.110332 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.112227 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.113246 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.113815 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.115394 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.116186 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.117180 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.117837 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.120412 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.121990 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.122537 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.122819 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.124650 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.125178 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.126590 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.127197 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.128536 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.129526 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.130122 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.131546 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.132029 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.144145 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.161505 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.184845 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zc72l" event={"ID":"7cbfce16-3ec7-4a22-a4ab-8d354fd56332","Type":"ContainerStarted","Data":"81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.184895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zc72l" event={"ID":"7cbfce16-3ec7-4a22-a4ab-8d354fd56332","Type":"ContainerStarted","Data":"68b2a94eab111e99a5160c082c8f6f7236a026ac9f2af68e0edccb4d40582d55"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.184953 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.187509 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.189700 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.190201 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.193607 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nhxdr" event={"ID":"b56634f4-6332-4457-bddd-34a13ac39f0b","Type":"ContainerStarted","Data":"65652523f0eb21977f37d8a499b5782f7d045aad68576de1b427d3fc73d8bbc0"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.197272 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.197311 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.199802 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d"} Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.200954 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.224187 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.248462 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.267907 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.310211 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.363025 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ppcbs"] Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.363370 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.369458 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.369795 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.369840 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.369811 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.370589 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.371102 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rd4h7"] Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.371786 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nfrvx"] Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.371983 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.372104 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.374414 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.374746 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.375547 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.375917 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.375921 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.376042 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.378086 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.378435 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.411792 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429104 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9fc5804-a6f3-4b7e-b115-68275cb68417-proxy-tls\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429141 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zm5\" (UniqueName: \"kubernetes.io/projected/fc2b4503-97f2-44cb-a1ad-e558df352294-kube-api-access-h4zm5\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429160 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a9fc5804-a6f3-4b7e-b115-68275cb68417-rootfs\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc 
kubenswrapper[4873]: I0121 00:06:28.429176 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-hostroot\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429193 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-cnibin\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429210 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-daemon-config\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-system-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429262 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-netns\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429278 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-binary-copy\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429334 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-kubelet\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429348 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429363 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9fc5804-a6f3-4b7e-b115-68275cb68417-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppcbs\" (UID: 
\"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429406 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-bin\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429422 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-conf-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429483 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-socket-dir-parent\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429498 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjffn\" (UniqueName: \"kubernetes.io/projected/f51bbfee-1a9c-46e8-81aa-e6359268a146-kube-api-access-sjffn\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429538 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bh9d\" (UniqueName: \"kubernetes.io/projected/a9fc5804-a6f3-4b7e-b115-68275cb68417-kube-api-access-6bh9d\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429590 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-etc-kubernetes\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429604 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-os-release\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429617 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-multus\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429631 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-multus-certs\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429686 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-os-release\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429702 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429743 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-cnibin\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429760 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-cni-binary-copy\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429775 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429817 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-system-cni-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.429843 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-k8s-cni-cncf-io\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.430080 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.442032 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 00:01:27 +0000 UTC, rotation deadline is 2026-11-09 05:51:58.178716689 +0000 UTC Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.442107 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7013h45m29.736611741s for next certificate rotation Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.455551 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.472682 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.484163 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.496683 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.509354 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.519677 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.530907 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a9fc5804-a6f3-4b7e-b115-68275cb68417-rootfs\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.530979 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9fc5804-a6f3-4b7e-b115-68275cb68417-proxy-tls\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531026 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/a9fc5804-a6f3-4b7e-b115-68275cb68417-rootfs\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531002 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zm5\" (UniqueName: \"kubernetes.io/projected/fc2b4503-97f2-44cb-a1ad-e558df352294-kube-api-access-h4zm5\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-hostroot\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-cnibin\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531780 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-daemon-config\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531804 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-system-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531823 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-netns\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531832 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-hostroot\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531845 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-cnibin\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531858 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-binary-copy\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " 
pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531883 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9fc5804-a6f3-4b7e-b115-68275cb68417-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531887 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-netns\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531903 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-bin\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531922 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-kubelet\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531939 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531971 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-socket-dir-parent\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531989 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-conf-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532004 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjffn\" (UniqueName: \"kubernetes.io/projected/f51bbfee-1a9c-46e8-81aa-e6359268a146-kube-api-access-sjffn\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532023 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bh9d\" (UniqueName: \"kubernetes.io/projected/a9fc5804-a6f3-4b7e-b115-68275cb68417-kube-api-access-6bh9d\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532038 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-etc-kubernetes\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532056 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532074 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-cnibin\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532092 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-os-release\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532109 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-multus\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532130 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-multus-certs\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.531899 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-system-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532152 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-os-release\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532155 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-conf-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532178 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-cni-binary-copy\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532194 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532198 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-kubelet\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532211 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-system-cni-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532241 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-k8s-cni-cncf-io\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532313 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-k8s-cni-cncf-io\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532405 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-socket-dir-parent\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532452 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-run-multus-certs\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532473 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-cni-dir\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532467 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-bin\") pod \"multus-nfrvx\" (UID: 
\"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532504 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-etc-kubernetes\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532534 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-system-cni-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532553 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532590 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-cnibin\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532607 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f51bbfee-1a9c-46e8-81aa-e6359268a146-os-release\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532639 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-os-release\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532641 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-multus-daemon-config\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532661 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-binary-copy\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.532853 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9fc5804-a6f3-4b7e-b115-68275cb68417-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: 
I0121 00:06:28.533018 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fc2b4503-97f2-44cb-a1ad-e558df352294-cni-binary-copy\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.533097 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f51bbfee-1a9c-46e8-81aa-e6359268a146-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.533169 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fc2b4503-97f2-44cb-a1ad-e558df352294-host-var-lib-cni-multus\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.533267 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.535894 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9fc5804-a6f3-4b7e-b115-68275cb68417-proxy-tls\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.547592 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.547742 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zm5\" (UniqueName: \"kubernetes.io/projected/fc2b4503-97f2-44cb-a1ad-e558df352294-kube-api-access-h4zm5\") pod \"multus-nfrvx\" (UID: \"fc2b4503-97f2-44cb-a1ad-e558df352294\") " pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.549015 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjffn\" (UniqueName: \"kubernetes.io/projected/f51bbfee-1a9c-46e8-81aa-e6359268a146-kube-api-access-sjffn\") pod \"multus-additional-cni-plugins-rd4h7\" (UID: \"f51bbfee-1a9c-46e8-81aa-e6359268a146\") " pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.550172 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bh9d\" (UniqueName: \"kubernetes.io/projected/a9fc5804-a6f3-4b7e-b115-68275cb68417-kube-api-access-6bh9d\") pod \"machine-config-daemon-ppcbs\" (UID: \"a9fc5804-a6f3-4b7e-b115-68275cb68417\") " pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.568209 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.587197 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.599370 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.616227 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.628798 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.646381 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.662929 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.681287 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.688193 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.690754 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.694679 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-nfrvx" Jan 21 00:06:28 crc kubenswrapper[4873]: W0121 00:06:28.724745 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc2b4503_97f2_44cb_a1ad_e558df352294.slice/crio-6fb8691826eee0a67f4b1b57c5132fd427d67ae2715f119d275f2d553172171c WatchSource:0}: Error finding container 6fb8691826eee0a67f4b1b57c5132fd427d67ae2715f119d275f2d553172171c: Status 404 returned error can't find the container with id 6fb8691826eee0a67f4b1b57c5132fd427d67ae2715f119d275f2d553172171c Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.732340 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.771963 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbp72"] Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.772297 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.773203 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.776692 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.776967 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.777265 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.779930 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.798752 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.818326 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834825 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834871 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834891 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834908 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834924 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834940 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt4d\" (UniqueName: \"kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834969 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834984 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.834999 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835026 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835067 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835091 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835194 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835312 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835337 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin\") pod 
\"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835359 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835445 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835469 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835501 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.835525 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.839234 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.873243 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.921520 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2
052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.921753 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936058 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936095 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936112 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936133 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936149 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936183 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936189 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936207 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936239 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936254 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936244 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936283 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936255 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936269 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936293 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936452 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936498 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936531 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936577 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt4d\" (UniqueName: 
\"kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936733 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936746 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936767 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936798 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936815 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936852 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936873 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936896 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.936915 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config\") pod 
\"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.937230 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.937268 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.937292 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.939732 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.940103 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.940257 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.942902 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.976432 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt4d\" (UniqueName: \"kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d\") pod \"ovnkube-node-hbp72\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:28 crc kubenswrapper[4873]: I0121 00:06:28.997858 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.002836 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.012281 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:58:06.904813877 +0000 UTC Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.018945 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.058398 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.089492 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.128956 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.168995 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.172532 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: W0121 00:06:29.195267 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12879027_cbf4_4393_a71e_2a42d8c9f0fe.slice/crio-4b7c0c75b706352fe70be863e2cf7e97d65cca8d72629e486a1110a5f310e62d WatchSource:0}: Error finding container 4b7c0c75b706352fe70be863e2cf7e97d65cca8d72629e486a1110a5f310e62d: Status 404 returned error can't find the container with id 4b7c0c75b706352fe70be863e2cf7e97d65cca8d72629e486a1110a5f310e62d Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.201985 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerStarted","Data":"4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.202030 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" 
event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerStarted","Data":"6fb8691826eee0a67f4b1b57c5132fd427d67ae2715f119d275f2d553172171c"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.203627 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.203653 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.203662 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"6b243cf1d11fe41637936a020c993806659a35c2009e96e28e67d38a91f70228"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.205087 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"4b7c0c75b706352fe70be863e2cf7e97d65cca8d72629e486a1110a5f310e62d"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.206658 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2" exitCode=0 Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.206735 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.206765 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerStarted","Data":"9cb85db4a349e4f25609277ae915d5ea15702cd92de85407210ca14d5d2f3b99"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.208133 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nhxdr" event={"ID":"b56634f4-6332-4457-bddd-34a13ac39f0b","Type":"ContainerStarted","Data":"44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b"} Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.215301 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.253463 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.288791 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.328421 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.365325 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.378965 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.427650 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.474393 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.510582 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.519187 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.538356 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.588334 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.632438 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.667014 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.704998 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.754488 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.792036 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.833399 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.866607 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.918464 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\
\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e
91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.947833 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:29 crc kubenswrapper[4873]: I0121 00:06:29.986623 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.012911 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:59:37.048069308 +0000 UTC Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.027882 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.047752 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.047939 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:06:34.047900987 +0000 UTC m=+26.287768683 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.048002 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.048042 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.048077 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.048104 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048190 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048192 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048259 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048243 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:34.048232735 +0000 UTC m=+26.288100381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048328 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048360 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048362 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:34.048342807 +0000 UTC m=+26.288210493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048357 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048416 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048437 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048440 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:34.048414109 +0000 UTC m=+26.288281775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.048502 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:34.048488271 +0000 UTC m=+26.288356037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.063326 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.063434 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.063328 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.063496 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.063520 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.063585 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.068303 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.105914 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.148069 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.212201 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17"} Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.214360 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" exitCode=0 Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.214418 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.217001 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435"} Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.216967 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435" exitCode=0 Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.231539 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.252794 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.273002 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.307516 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.350269 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.387147 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.427012 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.467602 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.505516 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.561424 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.588691 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.626049 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.682496 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.710999 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.747227 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.787646 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.826782 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.861812 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.863428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.863484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.863497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.863685 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.868202 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.919632 4873 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.919966 4873 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.921074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.921098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.921106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.921120 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.921129 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:30Z","lastTransitionTime":"2026-01-21T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.940220 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.944775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.944815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.944823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.944840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.944850 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:30Z","lastTransitionTime":"2026-01-21T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.954834 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z 
is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.959404 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.963069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.963108 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.963117 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.963130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.963139 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:30Z","lastTransitionTime":"2026-01-21T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.979334 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.983088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.983142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.983159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.983183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.983202 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:30Z","lastTransitionTime":"2026-01-21T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:30 crc kubenswrapper[4873]: I0121 00:06:30.987063 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:30 crc kubenswrapper[4873]: E0121 00:06:30.997406 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.001358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.001408 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.001421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.001440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.001669 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.013206 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:50:21.226967005 +0000 UTC Jan 21 00:06:31 crc kubenswrapper[4873]: E0121 00:06:31.018756 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: E0121 00:06:31.018933 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.020448 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.020482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.020494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.020510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.020523 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.027799 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.068122 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.113169 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.122949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.122988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.122999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.123018 4873 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.123051 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.127983 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.131517 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.151328 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.165812 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.207948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.222369 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728" exitCode=0 Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.222466 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.224492 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.224526 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.224536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.224554 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.224582 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226198 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226278 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226296 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226312 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226331 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.226347 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.268651 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.290754 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.326966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.327025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.327034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.327051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.327062 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.329332 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.364883 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.407955 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.429183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.429228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.429237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.429251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.429261 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.448672 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.489199 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.526922 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.531501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.531534 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.531545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.531586 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.531596 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.569396 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.623473 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z 
is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.634447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.634475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.634483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.634494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.634502 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.646926 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.693463 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.729523 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.737534 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.737607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.737621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.737649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.737664 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.771113 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.812948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.840441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.840490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.840505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.840526 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.840540 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.854272 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.890494 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.940392 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.943930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.944051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.944078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.944109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:31 crc kubenswrapper[4873]: I0121 00:06:31.944131 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:31Z","lastTransitionTime":"2026-01-21T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.013669 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:33:16.759029306 +0000 UTC Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.047438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.047505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.047527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.047599 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.047626 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.063410 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.063526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:32 crc kubenswrapper[4873]: E0121 00:06:32.063674 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:32 crc kubenswrapper[4873]: E0121 00:06:32.063814 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.063940 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:32 crc kubenswrapper[4873]: E0121 00:06:32.064054 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.149924 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.149988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.150000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.150018 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.150028 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.234407 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237" exitCode=0 Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.234475 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.253425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.253484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.253500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.253521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.253534 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.260438 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.283173 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.301644 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.325423 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.338932 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.355160 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.356382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.356419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.356428 4873 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.356443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.356452 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.366928 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.381327 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.395219 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.409969 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.421908 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.435948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.445749 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.458834 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.458874 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.458882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.458903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.458912 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.488785 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.532107 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:32Z 
is after 2025-08-24T17:21:41Z" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.560944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.561001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.561013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.561031 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.561042 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.663804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.663857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.663874 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.663895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.663910 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.766998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.767048 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.767064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.767086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.767102 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.869993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.870055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.870074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.870100 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.870117 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.974945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.975040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.975067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.975100 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:32 crc kubenswrapper[4873]: I0121 00:06:32.975120 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:32Z","lastTransitionTime":"2026-01-21T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.013878 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:28:45.570680699 +0000 UTC Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.077598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.077638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.077646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.077660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.077669 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.180728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.180789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.180809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.180833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.180850 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.247041 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.251023 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da" exitCode=0 Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.251092 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.271160 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.284127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.284192 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.284216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.284248 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.284271 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.310999 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.336666 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.354838 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.373929 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.387770 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.387825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.387841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.387865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.387883 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.394444 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.411078 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.428321 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.445595 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.455724 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.465098 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.478238 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.490677 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.490734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.490748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.490765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.490776 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.493282 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.507020 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.521650 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.592299 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.592359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.592370 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.592389 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.592401 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.695107 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.695146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.695156 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.695172 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.695182 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.798607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.798656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.798672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.798692 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.798707 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.900933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.900983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.900996 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.901011 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:33 crc kubenswrapper[4873]: I0121 00:06:33.901023 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:33Z","lastTransitionTime":"2026-01-21T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.003861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.003927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.003949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.003979 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.004002 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.014043 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:11:00.679480423 +0000 UTC Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.062477 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.062644 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.062650 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.063023 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.062853 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.063096 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.092621 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.092842 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:06:42.092811162 +0000 UTC m=+34.332678848 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.092903 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.092959 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.093008 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.093053 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093120 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093139 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093157 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093174 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093194 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093216 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:42.093190251 +0000 UTC m=+34.333057947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093242 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:42.093226972 +0000 UTC m=+34.333094668 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093268 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:42.093256612 +0000 UTC m=+34.333124278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093306 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093330 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093349 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:34 crc kubenswrapper[4873]: E0121 00:06:34.093401 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:42.093385365 +0000 UTC m=+34.333253051 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.107223 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.107278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.107295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.107317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.107334 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.210823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.210889 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.210913 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.210942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.210965 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.258255 4873 generic.go:334] "Generic (PLEG): container finished" podID="f51bbfee-1a9c-46e8-81aa-e6359268a146" containerID="0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8" exitCode=0 Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.258305 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerDied","Data":"0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.289829 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.309800 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.314040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.314108 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.314127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.314154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.314172 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.327044 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.346864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.368698 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.386416 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.399491 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.414981 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.417052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 
crc kubenswrapper[4873]: I0121 00:06:34.417089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.417097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.417112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.417124 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.431422 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.446258 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.461053 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.486301 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.498573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.511312 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.519758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.519790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.519802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.519819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.519833 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.525786 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.621884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.621940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.621952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.621973 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.621988 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.724246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.724289 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.724298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.724311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.724321 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.827212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.827335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.827376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.827412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.827435 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.930589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.930683 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.930719 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.930753 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:34 crc kubenswrapper[4873]: I0121 00:06:34.930775 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:34Z","lastTransitionTime":"2026-01-21T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.014791 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:17:41.055928267 +0000 UTC Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.034037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.034085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.034106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.034137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.034165 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.137395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.137792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.137802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.137816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.137826 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.240415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.240453 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.240461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.240474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.240482 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.264954 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" event={"ID":"f51bbfee-1a9c-46e8-81aa-e6359268a146","Type":"ContainerStarted","Data":"c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.279443 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.289023 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.305837 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.326877 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.338627 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.343084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.343118 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.343129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.343146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.343158 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.350780 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.362459 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.381375 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.394489 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.406571 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.418505 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.434359 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.445450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.445479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.445490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.445506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.445525 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.448589 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.459367 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.470298 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.549405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 
crc kubenswrapper[4873]: I0121 00:06:35.549442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.549450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.549463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.549472 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.653097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.653162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.653178 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.653204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.653222 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.757603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.757686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.757712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.757745 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.757766 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.860359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.860404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.860415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.860431 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.860442 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.963170 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.963231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.963241 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.963264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:35 crc kubenswrapper[4873]: I0121 00:06:35.963283 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:35Z","lastTransitionTime":"2026-01-21T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.015278 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:37:54.824608963 +0000 UTC Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.063199 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.063256 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.063261 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:36 crc kubenswrapper[4873]: E0121 00:06:36.063400 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:36 crc kubenswrapper[4873]: E0121 00:06:36.063520 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:36 crc kubenswrapper[4873]: E0121 00:06:36.063618 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.064993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.065024 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.065036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.065077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.065091 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.168462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.168586 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.168611 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.168640 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.168666 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.270857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.270910 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.270927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.270950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.270970 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.273802 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.274222 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.302483 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.312535 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.326429 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.341423 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.354874 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.365609 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.373895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.373998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.374018 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.374042 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.374061 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.377478 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.397085 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.427747 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.448657 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d
32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.462290 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.475083 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.476871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.476922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.476940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.476963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.476978 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.496440 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.509725 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.522897 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.535738 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.551598 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de8
4b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.566037 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.579414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.579501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.579519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.579541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.579605 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.583286 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:
06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.610783 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.625074 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.637591 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.649906 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.668814 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681334 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681774 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.681818 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.695019 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.706967 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.718106 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.728739 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.743477 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.759567 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.784307 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.784347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.784357 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.784372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.784382 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.886814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.886852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.886860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.886873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.886882 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.989163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.989222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.989233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.989247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:36 crc kubenswrapper[4873]: I0121 00:06:36.989256 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:36Z","lastTransitionTime":"2026-01-21T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.016020 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:49:36.306298246 +0000 UTC Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.095843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.095896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.095914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.095938 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.096664 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.199042 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.199085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.199096 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.199113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.199124 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.277022 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.277728 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.302335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.302831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.302861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.302893 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.302919 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.307626 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.330655 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.350154 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.376389 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.405664 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 
crc kubenswrapper[4873]: I0121 00:06:37.405727 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.405743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.405764 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.405780 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.408346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.424763 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.443130 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.464228 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.485059 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169b
b4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.497136 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.508343 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.508386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.508395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.508409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.508418 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.511540 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.523915 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d
32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.546823 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c
90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.561525 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.576879 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.589038 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.610720 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.610764 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.610775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.610791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.610801 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.713486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.713530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.713541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.713577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.713593 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.815754 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.815801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.815812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.815829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.815842 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.917778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.917824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.917835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.917852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:06:37 crc kubenswrapper[4873]: I0121 00:06:37.917864 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:37Z","lastTransitionTime":"2026-01-21T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.017210 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:32:52.578591295 +0000 UTC
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.021093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.021160 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.021174 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.021217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.021233 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.062801 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.062861 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.062911 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:38 crc kubenswrapper[4873]: E0121 00:06:38.062987 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:38 crc kubenswrapper[4873]: E0121 00:06:38.063082 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:38 crc kubenswrapper[4873]: E0121 00:06:38.063506 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.082917 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440
fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.095705 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.109518 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.121789 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.123432 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.123482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.123493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.123517 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.123531 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.137851 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.158076 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.176608 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.190127 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.201691 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.213514 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.226039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.226083 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.226095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.226114 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.226127 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.229905 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.260578 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.278797 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.281052 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.296544 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.319421 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.329281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.329337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc 
kubenswrapper[4873]: I0121 00:06:38.329355 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.329376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.329391 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.432906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.432970 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.432988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.433012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.433029 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.536384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.536451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.536471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.536490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.536506 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.639863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.639914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.639927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.639945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.639957 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.741993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.742036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.742045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.742059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.742068 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.844914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.844972 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.844983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.844999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.845008 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.948198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.948257 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.948274 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.948321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:38 crc kubenswrapper[4873]: I0121 00:06:38.948338 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:38Z","lastTransitionTime":"2026-01-21T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.017728 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:46:33.217606243 +0000 UTC Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.051942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.052048 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.052097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.052127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.052148 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.060325 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.084620 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.105246 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.126930 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.148502 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.154896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 
crc kubenswrapper[4873]: I0121 00:06:39.154949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.154963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.154981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.154993 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.171120 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d10
80b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.186542 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.199258 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.218525 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.236665 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.257349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.257412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.257427 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.257450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.257467 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.260025 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.275413 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.285309 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/0.log" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.287929 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad" exitCode=1 Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.288000 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.288481 4873 scope.go:117] "RemoveContainer" containerID="0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.291924 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.312700 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\
\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.329386 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.345948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.359732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.359776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.359811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.359826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.359836 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.362341 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.374115 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.387282 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.399658 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.420457 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a
397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.431864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df
qh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.443079 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.456851 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.461846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.461869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.461878 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.461890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.461899 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.470685 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.485389 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.499338 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.509174 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.526130 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.536760 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.547017 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.563915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.563958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.563967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.563981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.563990 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.666195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.666277 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.666320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.666424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.666468 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.769383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.769426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.769436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.769450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.769460 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.873459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.873596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.873622 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.873657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.873680 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.975991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.976040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.976049 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.976062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.976072 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:39Z","lastTransitionTime":"2026-01-21T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:39 crc kubenswrapper[4873]: I0121 00:06:39.998043 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.017947 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:43:53.959951111 +0000 UTC Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.062764 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.062794 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.062853 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:40 crc kubenswrapper[4873]: E0121 00:06:40.062924 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:40 crc kubenswrapper[4873]: E0121 00:06:40.063063 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:40 crc kubenswrapper[4873]: E0121 00:06:40.063233 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.078255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.078302 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.078311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.078328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.078339 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.181227 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.181265 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.181276 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.181291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.181301 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.284052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.284101 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.284115 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.284136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.284150 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.292591 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/0.log" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.295458 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.295600 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.322847 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.345642 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.364925 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.377270 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.385915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 
crc kubenswrapper[4873]: I0121 00:06:40.385946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.385954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.385967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.385976 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.390180 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.399289 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.413488 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.434514 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169b
b4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.447662 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.458832 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.472857 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc 
kubenswrapper[4873]: I0121 00:06:40.488815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.488862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.488873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.488892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.488903 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.491736 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.502895 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.514490 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.524372 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.591457 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.591496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.591506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.591519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.591531 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.694495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.694609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.694647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.694681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.694706 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.797807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.797899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.797930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.797965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.797986 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.901287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.901351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.901363 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.901380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:40 crc kubenswrapper[4873]: I0121 00:06:40.901393 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:40Z","lastTransitionTime":"2026-01-21T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.003632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.003772 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.003846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.003871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.003888 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.018521 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:46:34.782234832 +0000 UTC Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.106963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.107058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.107075 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.107098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.107115 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.210136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.210232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.210264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.210298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.210320 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.298515 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.313134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.313191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.313207 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.313230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.313248 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.337411 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.337491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.337506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.337530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.337583 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.351823 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.358106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.358173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.358190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.358207 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.358225 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.377947 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.382485 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.382539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.382572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.382590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.382604 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.396534 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.403879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.403929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.403945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.403968 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.403990 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.422033 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.426525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.426604 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.426627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.426648 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.426665 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:41Z","lastTransitionTime":"2026-01-21T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.440033 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: E0121 00:06:41.440271 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.579471 4873 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs"] Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.580304 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.584318 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.586467 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.601761 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.622204 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.640483 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.654630 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.666707 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.683347 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.698666 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.701636 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.701712 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.701744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbktr\" (UniqueName: 
\"kubernetes.io/projected/39f4e415-7fcf-4690-bfd7-00b180992e24-kube-api-access-dbktr\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.701838 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f4e415-7fcf-4690-bfd7-00b180992e24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.718221 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\
"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.747162 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.761087 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.773605 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.796974 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc 
kubenswrapper[4873]: I0121 00:06:41.802897 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.802950 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.802991 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbktr\" (UniqueName: \"kubernetes.io/projected/39f4e415-7fcf-4690-bfd7-00b180992e24-kube-api-access-dbktr\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.803078 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f4e415-7fcf-4690-bfd7-00b180992e24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.812689 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f4e415-7fcf-4690-bfd7-00b180992e24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.814039 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.814805 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f4e415-7fcf-4690-bfd7-00b180992e24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.828354 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.841525 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.852517 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbktr\" (UniqueName: \"kubernetes.io/projected/39f4e415-7fcf-4690-bfd7-00b180992e24-kube-api-access-dbktr\") pod \"ovnkube-control-plane-749d76644c-nvxhs\" (UID: \"39f4e415-7fcf-4690-bfd7-00b180992e24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.854542 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:41 crc kubenswrapper[4873]: I0121 00:06:41.866462 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:41Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.106966 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.107060 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.107092 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.107118 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.107138 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107235 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107304 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.107291193 +0000 UTC m=+50.347158839 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107690 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.107673912 +0000 UTC m=+50.347541558 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107748 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107760 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107771 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107793 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.107786635 +0000 UTC m=+50.347654281 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107819 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107838 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.107832016 +0000 UTC m=+50.347699662 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107872 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107880 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107887 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.107906 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.107901017 +0000 UTC m=+50.347768663 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.136699 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.136752 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:19:25.858535551 +0000 UTC Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.136854 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.136962 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.137120 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.137240 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.137413 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.141033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.141063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.141071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.141086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.141096 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.158432 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.249757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.249798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.249809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.249826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.249839 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.303627 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" event={"ID":"39f4e415-7fcf-4690-bfd7-00b180992e24","Type":"ContainerStarted","Data":"cb344074b58bcc475fc7b49cb1885b461716474af94684fba233571b8ce75842"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.306305 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/1.log" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.307477 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/0.log" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.312156 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac" exitCode=1 Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.312181 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.312216 4873 scope.go:117] "RemoveContainer" containerID="0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.313165 4873 scope.go:117] "RemoveContainer" containerID="b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.313358 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.330607 4873 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb
44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.351286 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.352743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.352777 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.352789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.352809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.352825 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.364988 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.377969 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.390415 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.413255 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.424723 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.438182 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.455109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.455150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.455161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.455179 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.455190 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.456816 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 
00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0
d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.469219 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.483427 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.498084 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc 
kubenswrapper[4873]: I0121 00:06:42.516972 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.535504 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.549080 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.557752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.557788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.557799 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.557815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.557826 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.568676 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.661194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.661251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.661268 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.661289 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.661300 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.697042 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mx2js"] Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.697512 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.697589 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.718684 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a9
41f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.732234 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.743108 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.754683 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.766427 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.767023 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.767053 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.767079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.767096 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.768179 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.788227 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.807578 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.815229 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbcc\" (UniqueName: \"kubernetes.io/projected/c7f7e62f-ce78-4588-994f-8ab17d7821d1-kube-api-access-txbcc\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.815303 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.826916 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.840236 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.853493 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.868370 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.869341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.869378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.869392 4873 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.869413 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.869428 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.884298 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.900733 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.916385 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbcc\" (UniqueName: \"kubernetes.io/projected/c7f7e62f-ce78-4588-994f-8ab17d7821d1-kube-api-access-txbcc\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.916465 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.916711 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: E0121 00:06:42.916824 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:43.416795859 +0000 UTC m=+35.656663525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.922750 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 
00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0
d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.935296 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.939975 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbcc\" (UniqueName: \"kubernetes.io/projected/c7f7e62f-ce78-4588-994f-8ab17d7821d1-kube-api-access-txbcc\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.952184 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.969427 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.972075 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.972113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:42 crc 
kubenswrapper[4873]: I0121 00:06:42.972126 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.972146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:42 crc kubenswrapper[4873]: I0121 00:06:42.972161 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:42Z","lastTransitionTime":"2026-01-21T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.075627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.075684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.075695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.075715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.075729 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.137367 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:14:40.912593171 +0000 UTC Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.178363 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.178403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.178411 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.178424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.178449 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.282251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.282322 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.282358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.282400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.282432 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.319167 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/1.log" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.325054 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" event={"ID":"39f4e415-7fcf-4690-bfd7-00b180992e24","Type":"ContainerStarted","Data":"ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.325110 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" event={"ID":"39f4e415-7fcf-4690-bfd7-00b180992e24","Type":"ContainerStarted","Data":"152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.339326 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.354201 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.373260 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.386012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.386058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.386066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.386085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.386095 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.402335 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 
00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0
d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.418259 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.420841 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:43 crc kubenswrapper[4873]: E0121 00:06:43.420996 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:43 crc kubenswrapper[4873]: E0121 00:06:43.421105 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:44.421073064 +0000 UTC m=+36.660940750 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.443013 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.457893 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.481092 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.488109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.488178 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.488195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.488219 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.488238 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.503450 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.520934 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.536835 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.549525 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.571989 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.587719 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.590450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.590529 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.590584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.590618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.590636 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.605027 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.618464 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.630773 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:43Z is after 2025-08-24T17:21:41Z" Jan 21 
00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.693778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.693831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.693841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.693859 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.693871 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.796531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.796628 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.796649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.796675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.796692 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.900699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.900766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.900785 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.900811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:43 crc kubenswrapper[4873]: I0121 00:06:43.900831 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:43Z","lastTransitionTime":"2026-01-21T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.005159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.005274 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.005298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.005328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.005350 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.063024 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.063141 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:44 crc kubenswrapper[4873]: E0121 00:06:44.063183 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.063252 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:44 crc kubenswrapper[4873]: E0121 00:06:44.063369 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:44 crc kubenswrapper[4873]: E0121 00:06:44.063590 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.108163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.108212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.108222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.108237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.108246 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.138197 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:06:23.74125937 +0000 UTC Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.211130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.211193 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.211210 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.211244 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.211268 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.314884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.314940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.314949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.314963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.314972 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.418045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.418126 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.418149 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.418181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.420048 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.430885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:44 crc kubenswrapper[4873]: E0121 00:06:44.431180 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:44 crc kubenswrapper[4873]: E0121 00:06:44.431296 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:46.431266344 +0000 UTC m=+38.671134030 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.523986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.524063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.524086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.524117 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.524141 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.626935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.627016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.627034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.627058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.627079 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.730241 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.730308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.730330 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.730366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.730388 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.832881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.832943 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.832953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.832974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.832986 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.936462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.936541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.936603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.936635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:44 crc kubenswrapper[4873]: I0121 00:06:44.936656 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:44Z","lastTransitionTime":"2026-01-21T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.040125 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.040183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.040199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.040222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.040241 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.063029 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:45 crc kubenswrapper[4873]: E0121 00:06:45.063275 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.138823 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:10:44.767825587 +0000 UTC Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.143392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.143468 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.143488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.143510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.143526 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.246835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.246869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.246877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.246892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.246902 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.348930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.349023 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.349057 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.349094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.349112 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.453161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.453273 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.453311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.453347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.453382 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.556321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.556413 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.556475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.556501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.556517 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.660016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.660083 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.660103 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.660129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.660148 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.763487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.763538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.763592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.763622 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.763644 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.867116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.867742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.867828 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.868252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.868329 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.972825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.972872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.972884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.972902 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:45 crc kubenswrapper[4873]: I0121 00:06:45.972917 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:45Z","lastTransitionTime":"2026-01-21T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.063180 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.063250 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.063317 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:46 crc kubenswrapper[4873]: E0121 00:06:46.063469 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:46 crc kubenswrapper[4873]: E0121 00:06:46.063698 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:46 crc kubenswrapper[4873]: E0121 00:06:46.063785 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.076659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.076721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.076738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.076765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.076783 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.139635 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:32:31.402584236 +0000 UTC Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.179717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.179770 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.179786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.179810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.179826 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.283045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.283101 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.283118 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.283140 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.283188 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.385062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.385099 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.385107 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.385120 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.385131 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.451740 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:46 crc kubenswrapper[4873]: E0121 00:06:46.451916 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:46 crc kubenswrapper[4873]: E0121 00:06:46.452062 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:50.452046811 +0000 UTC m=+42.691914457 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.487658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.487748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.487765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.487794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.487811 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.590721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.591185 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.591406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.591574 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.591714 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.694617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.694951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.695141 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.695281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.695434 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.799397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.799464 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.799501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.799541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.799668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.902792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.902891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.902909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.902932 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:46 crc kubenswrapper[4873]: I0121 00:06:46.902950 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:46Z","lastTransitionTime":"2026-01-21T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.006066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.006128 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.006139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.006154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.006165 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.063397 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:47 crc kubenswrapper[4873]: E0121 00:06:47.063567 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.109820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.109878 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.109895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.109911 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.109921 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.140503 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:09:45.830774896 +0000 UTC Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.213592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.213660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.213679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.213703 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.213722 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.316209 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.316276 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.316304 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.316334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.316358 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.419032 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.419112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.419126 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.419144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.419156 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.522527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.522638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.522657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.522681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.522700 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.626199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.626272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.626295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.626327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.626351 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.735997 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.736096 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.736114 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.736139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.736155 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.838861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.838931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.838950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.838974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.838991 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.942442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.942509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.942544 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.942613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:47 crc kubenswrapper[4873]: I0121 00:06:47.942637 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:47Z","lastTransitionTime":"2026-01-21T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.046384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.046444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.046457 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.046479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.046496 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.063084 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.063160 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:48 crc kubenswrapper[4873]: E0121 00:06:48.063215 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.063098 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:48 crc kubenswrapper[4873]: E0121 00:06:48.063373 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:48 crc kubenswrapper[4873]: E0121 00:06:48.063585 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.081102 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.102386 
4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.120107 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.136191 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.141350 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:57:48.362802825 +0000 UTC Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.148824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.148930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.148949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.148974 4873 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.148994 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.159841 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.174723 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.192211 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.216756 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6
393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.230770 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.251719 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.251778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.251795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.251818 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.251835 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.255915 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.269064 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 
00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.287414 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.302691 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.316510 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.331559 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.354205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.354244 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.354252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.354265 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.354275 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.362204 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0477a90be9db4b293a1bc3cd46da0bb0ec1c9d1080b75b2af31cf67db8f5acad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"message\\\":\\\"Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.653416 6186 factory.go:656] Stopping watch factory\\\\nI0121 00:06:38.653429 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 00:06:38.653739 6186 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.653937 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 00:06:38.654083 6186 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654126 6186 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 00:06:38.654175 6186 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 
00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0
d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.371238 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:48Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.456832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.456872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.456885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.456901 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.456913 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.560035 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.560109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.560134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.560168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.560195 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.662923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.662997 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.663016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.663041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.663060 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.765962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.766051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.766085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.766118 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.766139 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.869522 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.869609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.869624 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.869646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.869660 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.973693 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.973760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.973781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.973809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:48 crc kubenswrapper[4873]: I0121 00:06:48.973828 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:48Z","lastTransitionTime":"2026-01-21T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.063504 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:49 crc kubenswrapper[4873]: E0121 00:06:49.063759 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.078134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.078214 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.078238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.078267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.078285 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.142217 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:54:02.207115586 +0000 UTC Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.181430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.181486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.181507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.181533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.181584 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.285052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.285462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.285656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.285803 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.285957 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.389254 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.389308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.389325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.389351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.389371 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.492916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.492984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.493006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.493037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.493057 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.595837 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.595918 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.595944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.595969 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.595987 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.699076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.699161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.699186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.699217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.699239 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.801881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.801958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.801978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.802010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.802027 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.904993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.905115 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.905142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.905179 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:49 crc kubenswrapper[4873]: I0121 00:06:49.905201 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:49Z","lastTransitionTime":"2026-01-21T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.008995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.009076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.009092 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.009297 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.009310 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.062650 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.062770 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:50 crc kubenswrapper[4873]: E0121 00:06:50.062830 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.062698 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:50 crc kubenswrapper[4873]: E0121 00:06:50.062982 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:50 crc kubenswrapper[4873]: E0121 00:06:50.063212 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.112007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.112079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.112104 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.112134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.112158 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.142695 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:19:01.159100073 +0000 UTC Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.215238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.215306 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.215324 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.215364 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.215379 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.318406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.318467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.318484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.318508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.318524 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.421150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.421217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.421239 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.421268 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.421289 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.497667 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:50 crc kubenswrapper[4873]: E0121 00:06:50.497824 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:50 crc kubenswrapper[4873]: E0121 00:06:50.497921 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:06:58.497894628 +0000 UTC m=+50.737762304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.524424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.524478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.524491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.524511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.524523 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.627438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.627624 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.627657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.627687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.627712 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.730821 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.730880 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.730894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.730915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.730929 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.834175 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.834266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.834281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.834308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.834332 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.937815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.937858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.937866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.937880 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:50 crc kubenswrapper[4873]: I0121 00:06:50.937888 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:50Z","lastTransitionTime":"2026-01-21T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.040915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.040956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.040964 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.040978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.040987 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.063329 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.063592 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.142880 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:40:04.643407324 +0000 UTC Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.143335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.143383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.143403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.143429 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.143446 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.247206 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.247260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.247272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.247293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.247305 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.350102 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.350187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.350218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.350249 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.350272 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.452823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.452880 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.452888 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.452903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.452911 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.556283 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.556348 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.556371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.556399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.556417 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.576478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.576537 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.576601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.576622 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.576639 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.597226 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.602417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.602470 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.602488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.602511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.602528 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.624268 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.630375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.630464 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.630484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.630508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.630526 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.649574 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.655296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.655365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.655383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.655409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.655426 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.676912 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.682716 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.682763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.682788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.682817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.682843 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.704739 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:51 crc kubenswrapper[4873]: E0121 00:06:51.704988 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.707380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.707439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.707463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.707493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.707520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.810136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.810197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.810212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.810229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.810242 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.913392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.913457 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.913474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.913538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:51 crc kubenswrapper[4873]: I0121 00:06:51.913594 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:51Z","lastTransitionTime":"2026-01-21T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.016490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.016603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.016621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.016646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.016664 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.063640 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.063796 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:52 crc kubenswrapper[4873]: E0121 00:06:52.063926 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.063946 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:52 crc kubenswrapper[4873]: E0121 00:06:52.064087 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:52 crc kubenswrapper[4873]: E0121 00:06:52.064274 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.120124 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.120198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.120215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.120242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.120259 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.143488 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:47:15.343265491 +0000 UTC Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.223508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.223587 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.223603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.223629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.223643 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.327055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.327115 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.327135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.327159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.327178 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.429621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.429683 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.429699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.429723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.429739 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.533854 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.533915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.533931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.533955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.533972 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.637047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.637125 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.637143 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.637168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.637187 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.740779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.740848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.740867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.740891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.740907 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.844264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.844334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.844353 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.844381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.844399 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.947388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.947473 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.947496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.947524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:52 crc kubenswrapper[4873]: I0121 00:06:52.947542 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:52Z","lastTransitionTime":"2026-01-21T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.051401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.051443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.051455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.051474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.051491 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.062903 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:53 crc kubenswrapper[4873]: E0121 00:06:53.063145 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.143839 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:28:56.4686031 +0000 UTC Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.154007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.154071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.154089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.154112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.154130 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.257130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.257191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.257208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.257234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.257252 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.358752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.358792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.358802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.358816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.358824 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.461715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.461761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.461771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.461788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.461801 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.565208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.565285 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.565309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.565335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.565352 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.668987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.669036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.669044 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.669079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.669090 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.772404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.772476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.772497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.772525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.772578 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.875786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.875857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.875875 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.875899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.875916 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.979420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.979479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.979497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.979520 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:53 crc kubenswrapper[4873]: I0121 00:06:53.979537 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:53Z","lastTransitionTime":"2026-01-21T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.062819 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.062880 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.062834 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:54 crc kubenswrapper[4873]: E0121 00:06:54.063017 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:54 crc kubenswrapper[4873]: E0121 00:06:54.063080 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:54 crc kubenswrapper[4873]: E0121 00:06:54.063168 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.064129 4873 scope.go:117] "RemoveContainer" containerID="b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.082981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.083027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.083037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.083054 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.083075 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.086200 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.104163 4873 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.119955 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.135229 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.144664 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:15:35.474642819 +0000 UTC Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.151684 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.162193 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 
00:06:54.177236 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.185194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.185281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.185300 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.185321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.185347 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.196838 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.231760 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752
331b1bc355be6c5440ce1cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.256787 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.272702 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.287297 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.287320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.287328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.287342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.287350 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.294864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.305383 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.325419 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.341012 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.355880 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.362917 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/1.log" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.366330 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.366496 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.370355 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2af
fc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.389891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.389945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.389964 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.389987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.389999 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.391740 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.406087 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.420596 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.434466 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.450575 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.470920 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.488610 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.493380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.493426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.493444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.493471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.493495 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.506794 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.530056 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 
00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.549963 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.562150 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.585038 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.596620 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.596691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.596704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.596721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.596731 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.600204 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.617887 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 
00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.633507 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.649244 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.665757 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.699716 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.699790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.699806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.699832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.699850 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.802980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.803027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.803036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.803052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.803064 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.905193 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.905260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.905271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.905287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:54 crc kubenswrapper[4873]: I0121 00:06:54.905298 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:54Z","lastTransitionTime":"2026-01-21T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.009619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.009678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.009694 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.009717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.009735 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.062701 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:55 crc kubenswrapper[4873]: E0121 00:06:55.062888 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.112793 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.112852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.112868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.112892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.112919 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.145229 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:37:29.939597555 +0000 UTC Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.216055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.216105 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.216116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.216133 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.216146 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.319071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.319129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.319141 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.319155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.319165 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.374531 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/2.log" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.375927 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/1.log" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.382409 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" exitCode=1 Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.382515 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.382714 4873 scope.go:117] "RemoveContainer" containerID="b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.383866 4873 scope.go:117] "RemoveContainer" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" Jan 21 00:06:55 crc kubenswrapper[4873]: E0121 00:06:55.384182 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.397115 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.410802 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.421400 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.422055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.422105 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.422121 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.422146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.422162 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.435073 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.448418 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 
00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.459781 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.474670 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.489495 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.517303 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169b
b4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b223b6a5f577d45f5053de5dd212867a3fca9752331b1bc355be6c5440ce1cac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-image-registry/node-ca-zc72l in node crc\\\\nI0121 00:06:40.630908 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0121 00:06:40.630749 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0121 00:06:40.630915 6324 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-zc72l after 0 failed attempt(s)\\\\nI0121 00:06:40.630920 6324 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630927 6324 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-nhxdr\\\\nI0121 00:06:40.630930 6324 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 00:06:40.630902 6324 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0121 00:06:40.630958 6324 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0121 00:06:40.630996 6324 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.524346 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.524373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.524381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.524395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.524407 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.533570 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.553718 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.570360 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.597161 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.616517 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.626684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.626747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.626759 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.626776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.626786 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.636296 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.651250 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.669652 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:55Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.729606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.729663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 
00:06:55.729674 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.729689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.729699 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.832690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.832748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.832762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.832782 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.832798 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.935842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.935910 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.935929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.935955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:55 crc kubenswrapper[4873]: I0121 00:06:55.935976 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:55Z","lastTransitionTime":"2026-01-21T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.038957 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.039015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.039034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.039059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.039081 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.062415 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:56 crc kubenswrapper[4873]: E0121 00:06:56.062609 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.062885 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.062951 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:56 crc kubenswrapper[4873]: E0121 00:06:56.063176 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:56 crc kubenswrapper[4873]: E0121 00:06:56.063292 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.142266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.142336 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.142355 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.142384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.142403 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.145967 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:49:03.431310327 +0000 UTC Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.245439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.245509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.245527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.245583 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.245602 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.348659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.348732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.348750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.348775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.348796 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.389320 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/2.log" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.451450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.451493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.451504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.451522 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.451533 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.554302 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.554388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.554409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.554433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.554452 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.662313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.662382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.662401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.662425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.662444 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.766086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.766148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.766165 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.766189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.766206 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.869034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.869109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.869144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.869176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.869201 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.972384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.972472 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.972493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.972523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:56 crc kubenswrapper[4873]: I0121 00:06:56.972545 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:56Z","lastTransitionTime":"2026-01-21T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.062912 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:57 crc kubenswrapper[4873]: E0121 00:06:57.063141 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.076246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.076311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.076323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.076343 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.076357 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.146995 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:47:16.650516796 +0000 UTC Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.179598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.179689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.179714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.179742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.179762 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.282918 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.282988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.283007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.283032 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.283053 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.385966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.386017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.386030 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.386051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.386066 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.488730 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.488809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.488835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.488865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.488889 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.591701 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.591787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.591805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.591832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.591850 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.694201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.694266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.694277 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.694319 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.694335 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.797809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.797872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.797894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.797922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.797944 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.900728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.900786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.900804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.900828 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:57 crc kubenswrapper[4873]: I0121 00:06:57.900844 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:57Z","lastTransitionTime":"2026-01-21T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.003933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.003995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.004014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.004038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.004056 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.017285 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.018643 4873 scope.go:117] "RemoveContainer" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.018911 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.036536 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 
00:06:58.060285 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.063529 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.063661 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.063661 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.063782 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.063961 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.064209 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.095686 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.106138 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.106172 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.106180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.106195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.106206 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.111396 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.131383 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.143250 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.147210 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:10:14.402324158 +0000 UTC Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.160831 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.172909 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.183765 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.183888 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.183920 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.183945 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.183964 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:07:30.183942942 +0000 UTC m=+82.423810588 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.184007 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184055 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184074 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184089 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184096 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184114 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184158 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184181 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184127 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:30.184113986 +0000 UTC m=+82.423981632 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184280 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:30.1842547 +0000 UTC m=+82.424122386 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184311 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:30.184295581 +0000 UTC m=+82.424163367 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184299 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.184489 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:30.184446455 +0000 UTC m=+82.424314271 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.185893 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.199423 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.208679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.208714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.208725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.208743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.208755 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.211183 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.243659 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.263871 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.278774 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.292198 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.310707 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.310779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.310794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.310814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.310853 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.316729 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.334468 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.349998 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.363518 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.379966 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.395492 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.413188 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.413231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.413246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.413275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.413294 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.420183 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.436282 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.447375 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.464449 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.497852 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169b
b4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.515684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.515726 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.515742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.515762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.515774 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.519446 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.539461 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.557349 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.580180 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.587701 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.587847 4873 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: E0121 00:06:58.587899 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:14.587882352 +0000 UTC m=+66.827750008 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.594747 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.606072 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.617345 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.617379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.617389 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.617404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.617415 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.618131 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.631042 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:58Z is after 2025-08-24T17:21:41Z" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.720352 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.720385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.720393 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.720406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.720414 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.822852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.822906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.822925 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.822947 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.822966 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.925756 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.925833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.925856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.925885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:58 crc kubenswrapper[4873]: I0121 00:06:58.925907 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:58Z","lastTransitionTime":"2026-01-21T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.028727 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.028807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.028830 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.028861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.028883 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.063383 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:06:59 crc kubenswrapper[4873]: E0121 00:06:59.063530 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.136584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.136646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.136660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.136678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.136691 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.147389 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:45:50.504269408 +0000 UTC Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.239622 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.239659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.239667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.239680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.239689 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.342497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.342591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.342609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.342633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.342653 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.445795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.445868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.445891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.445921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.445945 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.548732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.548792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.548803 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.548821 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.548834 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.652398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.652477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.652497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.652521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.652579 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.755211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.755339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.755358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.755380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.755614 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.860453 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.860862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.861086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.861329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.861546 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.965498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.965833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.966104 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.966394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:06:59 crc kubenswrapper[4873]: I0121 00:06:59.966800 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:06:59Z","lastTransitionTime":"2026-01-21T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.062750 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.062764 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:00 crc kubenswrapper[4873]: E0121 00:07:00.063230 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:00 crc kubenswrapper[4873]: E0121 00:07:00.063392 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.062834 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:00 crc kubenswrapper[4873]: E0121 00:07:00.063937 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.069373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.069431 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.069448 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.069471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.069498 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.148235 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:19:48.225808911 +0000 UTC Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.171955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.172035 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.172051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.172069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.172084 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.275433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.275825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.275954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.276116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.276306 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.382050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.382146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.382176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.382216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.382253 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.486037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.486615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.486638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.486663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.486680 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.589404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.589472 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.589490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.589513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.589531 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.692681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.692742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.692758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.692780 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.692797 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.795929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.796027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.796051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.796127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.796148 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.899420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.899488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.899510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.899606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:00 crc kubenswrapper[4873]: I0121 00:07:00.899643 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:00Z","lastTransitionTime":"2026-01-21T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.003238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.003302 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.003319 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.003345 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.003369 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.063086 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.063265 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.105399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.105494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.105517 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.105544 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.105614 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.149177 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:40:52.758816749 +0000 UTC Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.208421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.208501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.208525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.208600 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.208628 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.311625 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.311662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.311670 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.311684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.311696 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.414769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.414836 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.414853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.414876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.414893 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.518237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.518320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.518354 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.518384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.518406 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.621136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.621173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.621182 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.621198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.621209 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.723797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.723863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.723884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.723912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.723936 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.826763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.826852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.826877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.826908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.826931 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.856635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.856698 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.856714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.856737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.856754 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.876832 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:01Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.881624 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.881669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.881684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.881704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.881719 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.898445 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:01Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.902924 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.902985 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.903002 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.903028 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.903045 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.923016 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:01Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.927463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.927524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.927542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.927600 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.927618 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.947960 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:01Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.951819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.951867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.951879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.951898 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.951913 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.971822 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:01Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:01 crc kubenswrapper[4873]: E0121 00:07:01.972174 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.974404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.974485 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.974510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.974543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:01 crc kubenswrapper[4873]: I0121 00:07:01.974616 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:01Z","lastTransitionTime":"2026-01-21T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.062903 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.063004 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.063052 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:02 crc kubenswrapper[4873]: E0121 00:07:02.063166 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:02 crc kubenswrapper[4873]: E0121 00:07:02.063261 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:02 crc kubenswrapper[4873]: E0121 00:07:02.063618 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.077010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.077064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.077087 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.077116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.077145 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.149853 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:16:44.663596754 +0000 UTC Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.180697 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.180761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.180778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.180802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.180819 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.283656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.283695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.283705 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.283724 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.283742 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.387036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.387168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.387205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.387239 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.387265 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.490038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.490098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.490115 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.490145 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.490166 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.594034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.594112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.594130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.594154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.594173 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.697629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.697713 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.697739 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.697770 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.697796 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.801150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.801200 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.801215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.801233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.801246 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.868233 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.882774 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.886530 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.904318 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.904376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.904393 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.904421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.904438 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:02Z","lastTransitionTime":"2026-01-21T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.914369 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.935171 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.968795 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:02 crc kubenswrapper[4873]: I0121 00:07:02.990736 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.007335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.007412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.007430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.007455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.007473 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.018004 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.037283 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.054015 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.062655 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:03 crc kubenswrapper[4873]: E0121 00:07:03.062800 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.072650 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.087518 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.101585 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.110317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 
crc kubenswrapper[4873]: I0121 00:07:03.110349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.110358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.110371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.110380 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.119833 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.141935 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 
00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.150456 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:37:47.202725879 +0000 UTC Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.157138 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.171030 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.192865 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.214113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.214158 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.214174 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.214199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.214216 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.224325 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b70
3e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:03Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.317121 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.317202 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.317226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.317259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.317283 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.419829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.419895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.419911 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.419935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.419952 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.523347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.523441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.523466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.523495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.523513 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.626723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.626805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.626826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.626850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.626867 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.729946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.730021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.730076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.730109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.730132 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.833535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.833642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.833665 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.833725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.833749 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.936527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.936655 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.936690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.936723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:03 crc kubenswrapper[4873]: I0121 00:07:03.936744 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:03Z","lastTransitionTime":"2026-01-21T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.039654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.039742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.039755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.039778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.039794 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.063301 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.063360 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.063485 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:04 crc kubenswrapper[4873]: E0121 00:07:04.063718 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:04 crc kubenswrapper[4873]: E0121 00:07:04.063859 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:04 crc kubenswrapper[4873]: E0121 00:07:04.064075 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.143391 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.143466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.143491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.143632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.143678 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.150709 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:26:47.805919899 +0000 UTC Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.246711 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.246778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.246790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.246811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.246823 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.349835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.349901 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.349917 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.349944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.349962 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.452468 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.452516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.452528 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.452565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.452577 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.555164 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.555211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.555224 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.555242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.555256 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.656920 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.656974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.656985 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.657006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.657019 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.759246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.759287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.759295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.759313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.759323 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.862399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.862444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.862455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.862471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.862483 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.965348 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.965403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.965419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.965442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:04 crc kubenswrapper[4873]: I0121 00:07:04.965459 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:04Z","lastTransitionTime":"2026-01-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.063263 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:05 crc kubenswrapper[4873]: E0121 00:07:05.063479 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.068720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.068789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.068809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.068833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.068850 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.151881 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:20:42.377676978 +0000 UTC Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.172462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.172529 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.172545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.172614 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.172637 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.275903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.276048 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.276135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.276171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.276194 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.379229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.379291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.379308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.379334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.379351 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.482375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.482422 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.482430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.482445 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.482454 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.585021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.585094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.585112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.585137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.585154 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.687540 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.687637 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.687655 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.687679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.687697 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.790619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.790711 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.790731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.790755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.790773 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.893958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.894007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.894018 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.894034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.894046 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.996870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.996921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.996933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.996952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:05 crc kubenswrapper[4873]: I0121 00:07:05.996969 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:05Z","lastTransitionTime":"2026-01-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.064512 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.064660 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:06 crc kubenswrapper[4873]: E0121 00:07:06.064744 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.064770 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:06 crc kubenswrapper[4873]: E0121 00:07:06.064905 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:06 crc kubenswrapper[4873]: E0121 00:07:06.065029 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.099682 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.099733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.099747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.099763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.099785 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.183068 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:03:57.207006801 +0000 UTC Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.201488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.201518 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.201528 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.201543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.201572 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.303774 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.303841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.303866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.303892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.303911 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.407260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.407325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.407342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.407366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.407384 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.510338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.510390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.510417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.510435 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.510444 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.613524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.613638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.613665 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.613694 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.613713 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.717112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.717176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.717195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.717240 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.717255 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.821405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.821476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.821495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.821520 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.821536 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.925157 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.925220 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.925243 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.925273 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:06 crc kubenswrapper[4873]: I0121 00:07:06.925294 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:06Z","lastTransitionTime":"2026-01-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.028067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.028142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.028163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.028191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.028211 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.063365 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:07 crc kubenswrapper[4873]: E0121 00:07:07.063534 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.131771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.131860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.131879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.131908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.131925 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.184034 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:12:27.962561985 +0000 UTC Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.234240 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.234281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.234313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.234327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.234336 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.337376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.337432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.337445 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.337465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.337480 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.439779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.439851 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.439864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.439881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.439896 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.542686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.542726 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.542738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.542755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.542767 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.646151 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.646202 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.646221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.646244 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.646262 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.749374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.749642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.749737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.749815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.749877 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.852731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.852797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.852814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.852841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.852858 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.955953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.956002 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.956015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.956032 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:07 crc kubenswrapper[4873]: I0121 00:07:07.956045 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:07Z","lastTransitionTime":"2026-01-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.058144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.058189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.058200 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.058218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.058230 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.062838 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.062846 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:08 crc kubenswrapper[4873]: E0121 00:07:08.063256 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.062898 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:08 crc kubenswrapper[4873]: E0121 00:07:08.063427 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:08 crc kubenswrapper[4873]: E0121 00:07:08.063266 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.080264 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.101229 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.115996 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.135202 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.156054 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.160444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.160503 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.160518 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.160560 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.160578 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.171213 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.184576 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:19:19.060201944 +0000 UTC Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.185934 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.204236 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.233940 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169b
b4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.250838 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.263237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.263276 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.263291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.263310 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.263324 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.270771 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.287879 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.306593 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.338849 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.360830 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.365906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.365963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.365987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.366016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.366037 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.385739 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.401641 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.417094 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:08Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.469334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.469385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 
00:07:08.469402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.469423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.469442 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.571655 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.571781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.571806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.571839 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.571860 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.674843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.674981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.675003 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.675027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.675045 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.777493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.777994 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.778146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.778296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.778435 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.881815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.881883 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.881897 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.881919 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.881933 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.984012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.984075 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.984089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.984106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:08 crc kubenswrapper[4873]: I0121 00:07:08.984118 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:08Z","lastTransitionTime":"2026-01-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.062541 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:09 crc kubenswrapper[4873]: E0121 00:07:09.062689 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.063356 4873 scope.go:117] "RemoveContainer" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" Jan 21 00:07:09 crc kubenswrapper[4873]: E0121 00:07:09.063529 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.087424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.087472 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.087484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.087502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.087514 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.185905 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:52:23.406562838 +0000 UTC Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.192185 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.192232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.192248 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.192269 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.192285 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.294090 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.294135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.294145 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.294161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.294170 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.397245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.397286 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.397301 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.397319 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.397330 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.500230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.500266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.500276 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.500316 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.500325 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.603884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.603956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.603975 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.604001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.604018 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.706490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.706532 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.706539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.706578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.706587 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.809633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.809716 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.809748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.809779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.809798 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.912896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.912946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.912959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.912978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:09 crc kubenswrapper[4873]: I0121 00:07:09.912991 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:09Z","lastTransitionTime":"2026-01-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.016568 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.016630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.016648 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.016679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.016702 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.063228 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.063237 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.063528 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:10 crc kubenswrapper[4873]: E0121 00:07:10.063734 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:10 crc kubenswrapper[4873]: E0121 00:07:10.063856 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:10 crc kubenswrapper[4873]: E0121 00:07:10.063992 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.119497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.119606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.119624 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.119649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.119668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.186961 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:02:54.572536001 +0000 UTC Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.222943 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.223012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.223030 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.223053 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.223069 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.326231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.326325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.326350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.326384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.326408 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.456095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.456187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.456208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.456230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.456282 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.558491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.558634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.558652 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.558681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.558698 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.660840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.660892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.660904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.660923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.660936 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.763790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.763848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.763866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.763891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.763909 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.866247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.866301 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.866312 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.866330 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.866342 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.969775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.969822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.969833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.969851 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:10 crc kubenswrapper[4873]: I0121 00:07:10.969863 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:10Z","lastTransitionTime":"2026-01-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.062789 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:11 crc kubenswrapper[4873]: E0121 00:07:11.063017 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.073112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.073324 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.073395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.073476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.073571 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.176180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.176232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.176245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.176262 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.176300 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.187319 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:00:34.27416803 +0000 UTC Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.279395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.279831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.279999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.280189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.280323 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.383587 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.383649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.383674 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.383719 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.383749 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.486383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.486446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.486458 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.486475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.486487 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.589158 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.589199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.589211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.589228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.589242 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.691672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.691738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.691748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.691769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.691791 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.794938 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.794999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.795019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.795044 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.795063 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.898173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.898240 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.898263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.898293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:11 crc kubenswrapper[4873]: I0121 00:07:11.898318 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:11Z","lastTransitionTime":"2026-01-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.000483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.000520 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.000530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.000565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.000576 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.062975 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.063031 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.063183 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.063174 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.063300 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.063390 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.103461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.103536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.103577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.103602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.103617 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.188159 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:23:28.528489014 +0000 UTC Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.206599 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.206679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.206700 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.206757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.206777 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.208426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.208470 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.208483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.208509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.208523 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.227412 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.231284 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.231348 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.231364 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.231392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.231409 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.244579 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.247581 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.247607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.247618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.247634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.247644 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.261012 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.265405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.265573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.265697 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.265809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.265924 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.286376 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.290616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.290659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.290667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.290684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.290695 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.305684 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:12 crc kubenswrapper[4873]: E0121 00:07:12.305830 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.308869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.309028 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.309125 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.309226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.309317 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.412679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.412741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.412752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.412773 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.412787 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.515623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.516078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.516212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.516359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.516502 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.621263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.621680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.621862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.622235 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.622487 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.725523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.725615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.725633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.725657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.725675 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.830253 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.830308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.830318 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.830334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.830344 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.934629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.934709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.934725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.934752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:12 crc kubenswrapper[4873]: I0121 00:07:12.934769 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:12Z","lastTransitionTime":"2026-01-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.037827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.037886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.037896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.037915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.037928 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.062458 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:13 crc kubenswrapper[4873]: E0121 00:07:13.062642 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.141218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.141256 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.141266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.141282 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.141293 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.188448 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:33:27.685777712 +0000 UTC Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.244221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.244260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.244270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.244285 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.244759 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.346940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.346976 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.346986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.347000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.347012 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.449646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.449686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.449706 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.449723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.449735 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.551848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.551891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.551899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.551912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.551921 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.655106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.655170 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.655180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.655197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.655209 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.758287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.758329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.758342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.758361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.758372 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.860168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.860233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.860246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.860260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.860271 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.963372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.963466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.963477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.963494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:13 crc kubenswrapper[4873]: I0121 00:07:13.963503 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:13Z","lastTransitionTime":"2026-01-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.063071 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:14 crc kubenswrapper[4873]: E0121 00:07:14.063214 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.063071 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.063275 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:14 crc kubenswrapper[4873]: E0121 00:07:14.063354 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:14 crc kubenswrapper[4873]: E0121 00:07:14.063420 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.065371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.065452 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.065498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.065521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.065536 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.169428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.169496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.169512 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.169538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.169578 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.189321 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:50:05.658437529 +0000 UTC Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.272454 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.272521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.272539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.272604 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.272625 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.375958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.376021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.376041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.376072 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.376096 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.478898 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.478965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.478980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.479000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.479015 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.582596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.582663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.582679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.582695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.582708 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.685969 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.686045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.686054 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.686088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.686098 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.686582 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:14 crc kubenswrapper[4873]: E0121 00:07:14.686700 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:07:14 crc kubenswrapper[4873]: E0121 00:07:14.686767 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:07:46.686748866 +0000 UTC m=+98.926616512 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.789006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.789071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.789084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.789107 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.789121 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.891983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.892027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.892039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.892056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.892065 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.995255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.995293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.995305 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.995324 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:14 crc kubenswrapper[4873]: I0121 00:07:14.995336 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:14Z","lastTransitionTime":"2026-01-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.062687 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:15 crc kubenswrapper[4873]: E0121 00:07:15.062884 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.099609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.101963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.102005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.102042 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.102063 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.189755 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:04:59.995042075 +0000 UTC Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.205623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.205793 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.205850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.205927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.205984 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.309404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.309456 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.309472 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.309504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.309520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.412869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.413334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.413515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.413765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.413947 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.474804 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/0.log" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.474878 4873 generic.go:334] "Generic (PLEG): container finished" podID="fc2b4503-97f2-44cb-a1ad-e558df352294" containerID="4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5" exitCode=1 Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.474926 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerDied","Data":"4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.475486 4873 scope.go:117] "RemoveContainer" containerID="4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.489943 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.503613 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 
00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.517613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.517664 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.517675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.517693 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.517706 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.522183 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.539307 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.556520 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.572348 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.597481 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.616774 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.621349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.621397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.621420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.621443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.621457 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.629959 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.649993 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.668513 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.687636 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc 
kubenswrapper[4873]: I0121 00:07:15.701478 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.713018 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.724063 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.724118 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.724129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.724152 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.724165 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.729535 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.745545 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.765584 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.782631 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:15Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.826847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.826883 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.826916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.826934 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.826943 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.929579 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.929612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.929620 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.929649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:15 crc kubenswrapper[4873]: I0121 00:07:15.929658 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:15Z","lastTransitionTime":"2026-01-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.032596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.032639 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.032649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.032669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.032712 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.062718 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.063123 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:16 crc kubenswrapper[4873]: E0121 00:07:16.063249 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.063353 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:16 crc kubenswrapper[4873]: E0121 00:07:16.063411 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:16 crc kubenswrapper[4873]: E0121 00:07:16.063842 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.079136 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.136811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.136855 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.136867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.136886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.136899 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.190020 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:14:59.965800833 +0000 UTC Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.239737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.239800 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.239810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.239827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.239841 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.342375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.342411 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.342421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.342438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.342451 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.445436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.445715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.445892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.446041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.446163 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.481299 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/0.log" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.481442 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerStarted","Data":"739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.503395 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.517189 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.534957 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.548984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.549013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.549021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.549035 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.549045 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.551286 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.567698 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 
00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.579452 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.593506 4873 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.605753 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.623948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.647346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.651451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.651714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.651790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.651857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.651926 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.664295 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.682579 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.700055 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.712012 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.732616 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf
7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.750022 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.754222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.754259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.754275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.754296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.754312 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.765711 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.780443 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.792647 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.856738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.856792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 
00:07:16.856805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.856826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.856839 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.960148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.960715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.960830 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.960957 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:16 crc kubenswrapper[4873]: I0121 00:07:16.961104 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:16Z","lastTransitionTime":"2026-01-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.062443 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:17 crc kubenswrapper[4873]: E0121 00:07:17.062668 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.063419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.063459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.063474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.063493 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.063506 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.165443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.165481 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.165492 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.165509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.165521 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.191268 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:17:36.957246418 +0000 UTC Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.268339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.268373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.268382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.268395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.268417 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.371082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.371165 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.371186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.371210 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.371230 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.474723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.474811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.474840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.474856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.474866 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.578392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.578477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.578495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.578518 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.578535 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.681709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.681758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.681767 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.681783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.681793 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.785498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.785584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.785598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.785621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.785633 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.889311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.889361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.889378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.889402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.889418 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.991907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.991962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.991980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.992001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:17 crc kubenswrapper[4873]: I0121 00:07:17.992018 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:17Z","lastTransitionTime":"2026-01-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.063493 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:18 crc kubenswrapper[4873]: E0121 00:07:18.063706 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.063761 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:18 crc kubenswrapper[4873]: E0121 00:07:18.063922 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.064103 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:18 crc kubenswrapper[4873]: E0121 00:07:18.064231 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.078370 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.096951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.097020 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.097039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.097065 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.097082 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.103692 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.120879 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.134212 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.155713 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.176288 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.190854 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.191905 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:40:14.459090846 +0000 UTC Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.199266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 
00:07:18.199592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.199691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.199832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.199973 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.205028 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemo
n\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.216350 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.232254 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.247454 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.264033 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.277316 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.289429 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.300407 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 
crc kubenswrapper[4873]: I0121 00:07:18.302143 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.302208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.302227 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.302251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.302267 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.311605 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.321986 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.333144 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.349446 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:18Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.404275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.404344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.404361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.404391 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.404410 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.507441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.507829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.508004 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.508150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.508286 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.610016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.610069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.610078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.610094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.610110 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.713715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.713782 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.713797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.713817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.713855 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.817489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.817542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.817582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.817596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.817605 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.919707 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.920007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.920074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.920144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:18 crc kubenswrapper[4873]: I0121 00:07:18.920208 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:18Z","lastTransitionTime":"2026-01-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.022665 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.022722 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.022731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.022745 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.022754 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.063229 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:19 crc kubenswrapper[4873]: E0121 00:07:19.063328 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.125802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.125857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.125868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.125882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.125915 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.192943 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:53:31.490800334 +0000 UTC Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.228882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.228927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.228944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.228967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.228984 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.332005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.332056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.332067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.332084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.332096 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.435184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.435247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.435259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.435275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.435287 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.539644 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.539709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.539731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.539970 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.540005 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.642889 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.642938 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.642949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.642967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.642980 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.745457 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.745497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.745505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.745516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.745523 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.847930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.847999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.848017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.848042 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.848060 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.951978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.952063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.952075 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.952095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:19 crc kubenswrapper[4873]: I0121 00:07:19.952111 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:19Z","lastTransitionTime":"2026-01-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.055198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.055242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.055275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.055294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.055306 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.062602 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.062919 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:20 crc kubenswrapper[4873]: E0121 00:07:20.063039 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.063202 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.063247 4873 scope.go:117] "RemoveContainer" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" Jan 21 00:07:20 crc kubenswrapper[4873]: E0121 00:07:20.063270 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:20 crc kubenswrapper[4873]: E0121 00:07:20.063410 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.158129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.158480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.158495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.158515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.158530 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.194092 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:09:49.515937379 +0000 UTC Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.261298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.261342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.261354 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.261377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.261389 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.364362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.364394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.364403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.364417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.364427 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.467639 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.467709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.467746 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.467770 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.467783 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.496891 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/2.log" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.500512 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.501058 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.522260 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.538031 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.551146 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.563190 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.570746 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.570798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.570814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.570833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.570846 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.579995 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.593627 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.609701 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.627700 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.648645 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.662421 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.674114 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.674156 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.674167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.674183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.674197 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.677904 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.696429 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.723705 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.746391 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.765338 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.776987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.777327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.777440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.777542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.777647 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.781865 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.797012 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.809296 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.823176 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.880064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.880501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.880626 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.880730 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.880810 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.983535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.983593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.983607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.983623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:20 crc kubenswrapper[4873]: I0121 00:07:20.983632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:20Z","lastTransitionTime":"2026-01-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.062956 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:21 crc kubenswrapper[4873]: E0121 00:07:21.063159 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.086426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.086494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.086511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.086536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.086650 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.189536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.189696 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.189720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.189741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.189755 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.195081 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:10:46.514288509 +0000 UTC Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.291933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.291985 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.291998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.292017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.292030 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.394312 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.394348 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.394356 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.394370 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.394380 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.497712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.497787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.497810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.497846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.497873 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.508115 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/3.log" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.509095 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/2.log" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.513592 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" exitCode=1 Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.513652 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.513695 4873 scope.go:117] "RemoveContainer" containerID="c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.514828 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:07:21 crc kubenswrapper[4873]: E0121 00:07:21.515168 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.540412 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.559672 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.577887 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.593308 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.600612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 
crc kubenswrapper[4873]: I0121 00:07:21.600798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.600870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.600945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.601006 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.609381 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.623382 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.635912 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.648588 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.666235 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.687273 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9bf8685eb47841c034f1f16bb2f569e43e60b703e0a105b083833666ba9b1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:06:55Z\\\",\\\"message\\\":\\\"ller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:06:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 00:06:54.962924 6519 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:21Z\\\",\\\"message\\\":\\\"930691 6913 services_controller.go:443] Built service openshift-authentication-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.150\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 00:07:20.930701 6913 services_controller.go:444] Built service openshift-authentication-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930708 6913 services_controller.go:445] Built service openshift-authentication-operator/metrics LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI0121 00:07:20.930658 6913 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-nhxdr in node crc\\\\nI0121 00:07:20.930751 6913 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-nhxdr after 0 failed attempt(s)\\\\nI0121 00:07:20.930778 6913 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nhxdr\\\\nF0121 00:07:20.930600 6913 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-lo
g\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.702354 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.703577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.703760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.703862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.703949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.704038 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.715682 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.737398 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.748658 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.762681 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.784236 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.798381 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.807177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.807236 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.807247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.807270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.807286 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.821948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.836006 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:21Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.910682 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.910744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.910758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.910787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:21 crc kubenswrapper[4873]: I0121 00:07:21.910803 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:21Z","lastTransitionTime":"2026-01-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.014122 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.014179 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.014195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.014217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.014236 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.062853 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.062986 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.063037 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.062866 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.063192 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.063228 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.116671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.116759 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.116777 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.116796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.116811 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.195643 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:48:25.366700022 +0000 UTC Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.219117 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.219190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.219204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.219250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.219265 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.321878 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.322002 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.322028 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.322111 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.322129 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.425135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.425205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.425237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.425266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.425287 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.520389 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/3.log" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.525917 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.527964 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.528538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.528605 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.528618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.528639 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.528652 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.544449 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.559599 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.580237 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 
00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.602163 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.619085 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.627383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.627419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.627430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.627447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.627460 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.634264 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.643250 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.648768 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.648806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.648817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.648833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.648845 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.650720 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.664128 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.669027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.669064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.669079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.669100 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.669116 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.674476 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:21Z\\\",\\\"message\\\":\\\"930691 6913 services_controller.go:443] Built service openshift-authentication-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.150\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 00:07:20.930701 6913 services_controller.go:444] Built service openshift-authentication-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930708 6913 services_controller.go:445] Built service openshift-authentication-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930658 6913 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-nhxdr in node crc\\\\nI0121 00:07:20.930751 6913 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-nhxdr after 0 failed attempt(s)\\\\nI0121 00:07:20.930778 6913 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nhxdr\\\\nF0121 00:07:20.930600 6913 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:07:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.684215 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.688366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.688688 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.688828 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.689008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.689162 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.688860 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21
T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.703182 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.707160 
4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.i
o/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4
643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.710963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.711109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.711189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.711269 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.711351 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.721386 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.725692 4873 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: E0121 00:07:22.725918 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.728146 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.728193 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.728210 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.728234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.728254 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.737681 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.755250 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.774173 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.790400 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.805684 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.819241 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.831525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.831612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.831632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.831657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.831675 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.839030 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.861439 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737
f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.935461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.935599 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 
00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.935628 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.935658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:22 crc kubenswrapper[4873]: I0121 00:07:22.935682 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:22Z","lastTransitionTime":"2026-01-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.039515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.039621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.039642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.039668 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.039685 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.062961 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:23 crc kubenswrapper[4873]: E0121 00:07:23.063154 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.145381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.145515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.145542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.145629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.145658 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.196473 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:04:44.290825758 +0000 UTC Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.249049 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.249091 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.249104 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.249123 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.249136 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.352039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.352677 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.352951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.353155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.353349 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.456684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.456752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.456772 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.456796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.456813 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.559497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.559542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.559570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.559591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.559604 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.663292 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.663671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.663794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.663900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.663996 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.766634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.766679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.766690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.766706 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.766716 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.869946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.870027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.870053 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.870083 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.870107 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.973371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.973444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.973469 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.973502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:23 crc kubenswrapper[4873]: I0121 00:07:23.973525 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:23Z","lastTransitionTime":"2026-01-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.063295 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.063616 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.063536 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:24 crc kubenswrapper[4873]: E0121 00:07:24.063795 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:24 crc kubenswrapper[4873]: E0121 00:07:24.064068 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:24 crc kubenswrapper[4873]: E0121 00:07:24.064224 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.076432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.076497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.076515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.076542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.076582 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.181749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.182124 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.182260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.182418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.182593 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.197254 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:25:23.921418126 +0000 UTC Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.286686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.287060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.287253 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.287504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.287707 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.390992 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.391058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.391082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.391111 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.391166 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.495866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.495933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.495953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.495981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.496000 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.599988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.600398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.600590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.600755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.600893 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.704108 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.704170 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.704187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.704215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.704233 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.807726 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.808116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.808304 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.808450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.808651 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.911115 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.911192 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.911217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.911246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:24 crc kubenswrapper[4873]: I0121 00:07:24.911268 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:24Z","lastTransitionTime":"2026-01-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.014462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.014539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.014586 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.014617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.014634 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.063152 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:25 crc kubenswrapper[4873]: E0121 00:07:25.063437 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.119180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.119275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.119297 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.119331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.119353 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.198462 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:26:16.506439435 +0000 UTC Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.223013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.223109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.223131 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.223168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.223204 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.329008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.329370 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.329480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.329583 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.329660 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.432744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.432784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.432796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.432812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.432826 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.535847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.535909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.535928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.535954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.535975 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.638792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.639117 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.639222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.639319 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.639626 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.743124 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.743435 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.743521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.743678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.743799 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.846873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.847238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.847453 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.847701 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.847885 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.951320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.951375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.951391 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.951414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:25 crc kubenswrapper[4873]: I0121 00:07:25.951431 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:25Z","lastTransitionTime":"2026-01-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.054486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.054530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.054573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.054595 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.054612 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.063142 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.063238 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.063511 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:26 crc kubenswrapper[4873]: E0121 00:07:26.063735 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:26 crc kubenswrapper[4873]: E0121 00:07:26.063866 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:26 crc kubenswrapper[4873]: E0121 00:07:26.064028 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.157816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.157908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.157935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.157970 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.157996 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.199641 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:24:16.951244763 +0000 UTC Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.261052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.261129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.261154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.261181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.261199 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.363784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.363828 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.363837 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.363859 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.363871 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.467242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.468195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.468367 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.468651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.468883 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.571771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.571831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.571843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.571875 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.571889 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.675213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.675270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.675287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.675310 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.675328 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.780081 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.780279 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.780321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.780356 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.780378 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.883308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.883372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.883390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.883414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.883431 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.986332 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.986388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.986404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.986429 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:26 crc kubenswrapper[4873]: I0121 00:07:26.986448 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:26Z","lastTransitionTime":"2026-01-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.062824 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:27 crc kubenswrapper[4873]: E0121 00:07:27.063068 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.090024 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.090094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.090110 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.090134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.090153 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.193679 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.193731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.193750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.193775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.193793 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.200902 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:02:51.431213332 +0000 UTC Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.301418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.301480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.301494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.301535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.301578 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.404911 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.404978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.404998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.405023 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.405042 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.508179 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.508258 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.508280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.508306 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.508322 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.611608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.611687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.611705 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.611729 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.611747 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.714712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.714766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.714786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.714810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.714828 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.817328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.817412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.817428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.817451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.817470 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.920430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.920508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.920533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.920603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:27 crc kubenswrapper[4873]: I0121 00:07:27.920625 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:27Z","lastTransitionTime":"2026-01-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.023349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.023425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.023450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.023483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.023511 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.062482 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.062614 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:28 crc kubenswrapper[4873]: E0121 00:07:28.062718 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:28 crc kubenswrapper[4873]: E0121 00:07:28.062822 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.063023 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:28 crc kubenswrapper[4873]: E0121 00:07:28.063135 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.080509 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.103922 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.117490 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.128532 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.128656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.128666 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.128684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.128696 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.135124 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.145653 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.155407 4873 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.170332 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.182614 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.194108 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.201620 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:08:06.380520226 +0000 UTC Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.204571 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.215171 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.226196 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.230987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.231055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.231067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.231089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.231100 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.235934 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.250260 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.265289 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.285637 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:21Z\\\",\\\"message\\\":\\\"930691 6913 services_controller.go:443] Built service openshift-authentication-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.150\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 00:07:20.930701 6913 services_controller.go:444] Built service openshift-authentication-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930708 6913 services_controller.go:445] Built service openshift-authentication-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930658 6913 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-nhxdr in node crc\\\\nI0121 00:07:20.930751 6913 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-nhxdr after 0 failed attempt(s)\\\\nI0121 00:07:20.930778 6913 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nhxdr\\\\nF0121 00:07:20.930600 6913 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:07:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.299540 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.315565 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.329453 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:28Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.333398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.333426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc 
kubenswrapper[4873]: I0121 00:07:28.333435 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.333448 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.333457 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.436795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.436872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.436887 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.436911 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.436930 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.539520 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.539599 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.539616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.539637 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.539654 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.642339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.642378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.642385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.642400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.642409 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.744717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.744756 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.744769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.744786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.744798 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.847139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.847181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.847194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.847209 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.847221 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.949971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.950009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.950060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.950077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:28 crc kubenswrapper[4873]: I0121 00:07:28.950088 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:28Z","lastTransitionTime":"2026-01-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.053200 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.053231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.053240 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.053252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.053261 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.063248 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:29 crc kubenswrapper[4873]: E0121 00:07:29.063338 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.155945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.156232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.156406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.156610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.156788 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.201761 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:04:35.515374564 +0000 UTC Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.259445 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.259513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.259531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.259590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.259612 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.362271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.362708 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.362918 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.363109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.363292 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.467418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.467478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.467496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.467519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.467536 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.570761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.570838 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.570859 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.570886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.570906 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.673735 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.673811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.673829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.673856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.673878 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.777001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.777067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.777090 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.777116 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.777135 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.880619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.880677 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.880693 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.880717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.880738 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.983963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.984029 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.984052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.984082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:29 crc kubenswrapper[4873]: I0121 00:07:29.984104 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:29Z","lastTransitionTime":"2026-01-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.062836 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.063312 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.062940 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.062897 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.063731 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.063850 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.086659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.086729 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.086752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.086781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.086803 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.190250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.190623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.190783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.190942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.191082 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.202636 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:48:54.299735421 +0000 UTC Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.266339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266527 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.266487933 +0000 UTC m=+146.506355619 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.266647 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.266744 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.266817 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266870 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266899 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266921 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266976 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.266894 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.266982 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.266963197 +0000 UTC m=+146.506830873 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267084 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267154 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.26708962 +0000 UTC m=+146.506957296 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267218 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.267196053 +0000 UTC m=+146.507063799 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267347 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267388 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267411 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:07:30 crc kubenswrapper[4873]: E0121 00:07:30.267477 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.267461132 +0000 UTC m=+146.507328818 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.293520 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.293570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.293579 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.293593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.293601 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.396817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.396868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.396879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.396896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.396908 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.499856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.499920 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.499934 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.499955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.499974 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.603063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.603099 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.603109 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.603138 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.603147 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.705595 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.705636 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.705644 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.705659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.705668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.807844 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.807889 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.807899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.807912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.807922 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.911749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.911821 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.911837 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.911863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:30 crc kubenswrapper[4873]: I0121 00:07:30.911916 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:30Z","lastTransitionTime":"2026-01-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.014847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.014890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.014907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.014923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.014933 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.062648 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:31 crc kubenswrapper[4873]: E0121 00:07:31.062803 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.117885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.117945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.117962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.117980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.117992 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.203203 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:19:55.669770269 +0000 UTC Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.220874 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.220935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.220952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.220976 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.220993 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.324394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.324462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.324481 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.324503 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.324520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.427726 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.427789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.427805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.427830 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.427848 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.530895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.530962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.530985 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.531012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.531029 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.634005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.634711 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.634747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.634771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.634787 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.738608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.738758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.738779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.738848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.738869 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.841866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.841903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.841915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.841931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.841943 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.945173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.945228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.945245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.945268 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:31 crc kubenswrapper[4873]: I0121 00:07:31.945285 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:31Z","lastTransitionTime":"2026-01-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.048435 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.048741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.048764 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.048796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.048820 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.063111 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.063285 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.063674 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.063832 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.064048 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.064417 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.154027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.154094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.154111 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.154135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.154152 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.204364 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:21:09.235167594 +0000 UTC Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.257931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.258904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.259141 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.259390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.259635 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.363405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.363449 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.363460 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.363479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.363492 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.465855 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.465923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.465943 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.465971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.465990 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.569211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.569266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.569280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.569311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.569332 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.672855 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.673479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.673750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.673787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.673874 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.777120 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.777165 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.777177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.777196 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.777209 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.826386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.826888 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.827008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.827093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.827189 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.841143 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.845457 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.845488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.845497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.845527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.845538 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.860424 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.864860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.864899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.864910 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.864929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.864942 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.878273 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.883369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.883577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.883688 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.883808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.883891 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.896883 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.900966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.901004 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.901016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.901040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.901054 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.915566 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5ea20d25-b537-4725-8df1-c1c72b69bcdb\\\",\\\"systemUUID\\\":\\\"5b5b45e7-e6db-419e-86e2-5c78d53566ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:32Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:32 crc kubenswrapper[4873]: E0121 00:07:32.915694 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.917823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.917866 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.917877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.917897 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:32 crc kubenswrapper[4873]: I0121 00:07:32.917909 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:32Z","lastTransitionTime":"2026-01-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.020825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.020886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.020899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.020921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.020937 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.062727 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:33 crc kubenswrapper[4873]: E0121 00:07:33.062924 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.122954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.123000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.123010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.123024 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.123034 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.205080 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:35:42.283730831 +0000 UTC Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.225533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.225664 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.225689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.225718 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.225740 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.328446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.328513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.328533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.328593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.328615 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.432015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.432199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.432278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.432359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.432443 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.536061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.536139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.536155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.536190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.536213 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.639637 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.639745 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.639762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.639786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.639805 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.743669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.743743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.743774 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.743811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.743834 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.847340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.847420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.847443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.847475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.847495 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.950708 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.950804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.950825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.950857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:33 crc kubenswrapper[4873]: I0121 00:07:33.950876 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:33Z","lastTransitionTime":"2026-01-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.054811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.055294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.055497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.055876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.056062 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.063163 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.063263 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:34 crc kubenswrapper[4873]: E0121 00:07:34.063282 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:34 crc kubenswrapper[4873]: E0121 00:07:34.063468 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.064852 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:34 crc kubenswrapper[4873]: E0121 00:07:34.065078 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.066628 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:07:34 crc kubenswrapper[4873]: E0121 00:07:34.067063 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.159907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.159993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.160025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.160056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.160089 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.205681 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:42:59.057642575 +0000 UTC Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.262885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.262941 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.262951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.262967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.262995 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.366443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.366513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.366536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.366612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.366638 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.469876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.469956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.469974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.470001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.470021 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.572852 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.572936 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.572971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.572995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.573008 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.677040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.677111 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.677129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.677153 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.677169 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.780576 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.780651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.780667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.780692 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.780708 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.883582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.884032 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.884335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.884515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.884694 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.989000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.989049 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.989063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.989082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:34 crc kubenswrapper[4873]: I0121 00:07:34.989095 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:34Z","lastTransitionTime":"2026-01-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.063004 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:35 crc kubenswrapper[4873]: E0121 00:07:35.063628 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.091298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.091653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.091798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.091910 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.092020 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.194954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.194999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.195012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.195039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.195052 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.206364 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:48:53.377352243 +0000 UTC Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.297511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.297594 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.297634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.297672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.297695 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.400323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.400388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.400403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.400425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.400442 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.503412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.503503 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.503542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.503635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.503661 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.606287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.606642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.606730 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.606808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.606883 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.710204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.710632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.710725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.710838 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.710920 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.813827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.813872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.813883 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.813902 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.813913 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.917133 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.917195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.917209 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.917230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:35 crc kubenswrapper[4873]: I0121 00:07:35.917245 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:35Z","lastTransitionTime":"2026-01-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.020584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.020661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.020677 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.020699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.020714 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.063617 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.063702 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:36 crc kubenswrapper[4873]: E0121 00:07:36.063784 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:36 crc kubenswrapper[4873]: E0121 00:07:36.063865 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.063972 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:36 crc kubenswrapper[4873]: E0121 00:07:36.064043 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.124341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.124390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.124401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.124422 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.124437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.207119 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:47:56.634731305 +0000 UTC Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.227932 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.227978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.227988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.228008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.228020 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.331041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.331092 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.331103 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.331123 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.331138 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.434132 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.434241 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.434258 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.434312 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.434339 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.537492 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.537606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.537631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.537669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.537693 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.641448 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.641523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.641542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.641653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.641680 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.751325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.752233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.752295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.752333 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.752357 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.856682 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.856768 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.856793 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.856824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.856846 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.960306 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.960369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.960385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.960409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:36 crc kubenswrapper[4873]: I0121 00:07:36.960428 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:36Z","lastTransitionTime":"2026-01-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.062489 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:37 crc kubenswrapper[4873]: E0121 00:07:37.062789 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.063827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.064041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.064204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.064365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.064493 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.167846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.167930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.167949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.167977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.167999 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.207993 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:22:33.739881145 +0000 UTC Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.271265 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.271322 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.271334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.271355 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.271369 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.375224 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.375281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.375304 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.375329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.375347 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.478647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.479076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.479167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.479265 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.479333 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.583248 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.583332 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.583351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.583383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.583403 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.686333 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.686779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.686860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.686927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.686986 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.789965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.790016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.790026 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.790048 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.790061 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.893154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.893207 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.893221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.893242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.893258 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.996500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.996636 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.996660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.996688 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:37 crc kubenswrapper[4873]: I0121 00:07:37.996710 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:37Z","lastTransitionTime":"2026-01-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.063078 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.063983 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.064103 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:38 crc kubenswrapper[4873]: E0121 00:07:38.064324 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:38 crc kubenswrapper[4873]: E0121 00:07:38.064462 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:38 crc kubenswrapper[4873]: E0121 00:07:38.064620 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.080317 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx2js" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7f7e62f-ce78-4588-994f-8ab17d7821d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txbcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx2js\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.095615 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a5088f6-1abd-4dba-be64-343625f47027\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b93378e6885106059024e384222a61e278c97aca13a5a52dec9288e75b5c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d073996d12906cde4ab8832502f3f018ad7d2e077d136ac642624119c4f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19802a3be018fd2df97fe3ec6fd6082f5ea346ae143dc87787c16c16b2dcb0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79dd5f69ac9ea30214d6de2f6f01978fc5917b283f8ffb723959f98c6e16675b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.101038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.101101 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.101112 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.101126 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.101138 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.123659 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96233abe-be71-4383-8085-8b1506047eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://096099cce5d322e80e054261fd68275587946e04b8a2acd677faeb03909c4d68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737f1d6e27ce189175ebb15230899ac072bc2a770b9156c90ee5aebc756a1ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed2b0ea998eba111352fffd2a12e3465502a72fde449604a16cd7fe021d32fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78bf96a189cc8a5bff4c65a43d5a7337fd42dcf7a992fb5053dbe1669b4e20f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18d622edefca094c713520b93474500f7b3b774f535535ca032d846f6b6ee53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://049c124c95655a290d87a08b81ee23bd007440fd2f0a8161de7aa968b2bd2389\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a84d6bf6393aa56e6d985750e1636b5df8e23a941f2052e91c276c558af61bdd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860bf891e2c814f50f8eaf1dccebeb7eb4c86650de9e41d6107af1a26b1a81e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.141089 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.163961 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bb992b751eea81a1c180a24b014158852e7e938358d5a8c1916be5f4bf36ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b247f247a57c27b6f9f7c7d2994cc52a845914ae115cd594bd07a1fc1378c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.183895 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9fc5804-a6f3-4b7e-b115-68275cb68417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be16ca0f5cc9f9e161b43b94e3da637e1118f8d2ad068af71346c4a9ade78b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bh9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.198600 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69c7f1d8-0551-484a-bea5-b688bb3e0793\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 00:06:20.452036 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 00:06:20.453397 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1513025903/tls.crt::/tmp/serving-cert-1513025903/tls.key\\\\\\\"\\\\nI0121 00:06:26.336285 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:06:26.339361 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:06:26.339439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:06:26.339787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:06:26.339835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:06:26.344862 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:06:26.344903 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344915 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:06:26.344923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:06:26.344929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:06:26.344935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:06:26.344940 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:06:26.344970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:06:26.354901 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.203964 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.204044 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.204061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.204084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.204100 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.208397 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:06:20.262356041 +0000 UTC Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.216373 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.247909 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.283680 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b6d477f5c2b18a1fec0b106997eac137bb8f3069a839c7221159a13c15e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.303449 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f4e415-7fcf-4690-bfd7-00b180992e24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://152d10bf802b5246ad744f89a111a003ff4d6b9f33e4c54e997712b2c2ef4f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae496196093119ae02ccf37b35a22d7da82b1af965df3003018642df738a5cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbktr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nvxhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.308161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.308340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.308459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.308566 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.308650 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.318263 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43d5a9c4-d31f-477b-9785-02293b43736a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01676aea70c76bbd3f85ef64bf5dd1bcf7b54c811fa56187a74b3cbf811436f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e572dc3e5aa36e4d70097949d4dea1ced0bbf43566ca9e6d715eb82b952da5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.332936 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zc72l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cbfce16-3ec7-4a22-a4ab-8d354fd56332\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81c5af93db99430c7bad6ab55afbea284b7980a36e512c5ccca4671d52cde528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zc72l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.351069 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nhxdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b56634f4-6332-4457-bddd-34a13ac39f0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e0f2d5809d977295724c631b08622b74d15da29fcc727526a1954c364d793b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfjds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nhxdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.368072 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nfrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc2b4503-97f2-44cb-a1ad-e558df352294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:14Z\\\",\\\"message\\\":\\\"2026-01-21T00:06:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844\\\\n2026-01-21T00:06:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f983569-148e-4058-869c-bb392df60844 to /host/opt/cni/bin/\\\\n2026-01-21T00:06:29Z [verbose] multus-daemon started\\\\n2026-01-21T00:06:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T00:07:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4zm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nfrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.402573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12879027-cbf4-4393-a71e-2a42d8c9f0fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T00:07:21Z\\\",\\\"message\\\":\\\"930691 6913 services_controller.go:443] Built service openshift-authentication-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.150\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 00:07:20.930701 6913 services_controller.go:444] Built service openshift-authentication-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930708 6913 services_controller.go:445] Built service openshift-authentication-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 00:07:20.930658 6913 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-nhxdr in node crc\\\\nI0121 00:07:20.930751 6913 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-nhxdr after 0 failed attempt(s)\\\\nI0121 00:07:20.930778 6913 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nhxdr\\\\nF0121 00:07:20.930600 6913 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:07:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbp72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.411718 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.412067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.412380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.412638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.412821 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.418204 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"029830ed-a099-4770-90b2-fba28bd1871b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6752720d769688b325584c1444adeecd68b574e9b32a9e37a41e815d4e9ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c4cb554837a371c86205e4417d20f156969f137b50971f966094a968dc05d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e07a8f3f4d9a3b34da0c75de84b02fb4977d10ae96dee97d32f5bbb635ad5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.432900 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f12025a3d58b9c54d96952e90f943b2b18bf2aa1592093b325e2688715c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.450011 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f51bbfee-1a9c-46e8-81aa-e6359268a146\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6555ec6928b27661ade954e9ca4685d1cc5f8698b1b9ff5552a51462d0034d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c689ed0d45f1b9a5a8ea6522a18be5da61db15098c95ac52538a65a6fc31d0d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6b72a8c54c3fdc6c73d6075dec403e026515cebe5b9d1e8be3afb3ce26b435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f2572a718059f173f98278abbaa5efab8180efe845368b362b5c1011af1728\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://1b23eae61f1910a4f881a2a117d008155e21dd41725d5f3a2101365568b58237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c781017598b3d37c550ffc3c8d17d58537b2a0709fdfb0069079db8a21995da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0786bcfa972f6188bda037a8368daabfb10082fccad10d21647d9b7c8dce45e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjffn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:06:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rd4h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T00:07:38Z is after 2025-08-24T17:21:41Z" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.515981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.516048 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.516066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.516094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.516113 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.619184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.619245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.619256 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.619275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.619286 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.722536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.722617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.722635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.722656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.722672 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.825809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.825869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.825882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.825905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.825917 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.929019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.929082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.929097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.929120 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:38 crc kubenswrapper[4873]: I0121 00:07:38.929138 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:38Z","lastTransitionTime":"2026-01-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.032424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.032539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.032588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.032615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.032633 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.063237 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:39 crc kubenswrapper[4873]: E0121 00:07:39.063637 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.135750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.135836 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.135850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.135876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.135894 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.209198 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:01:03.198653678 +0000 UTC Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.239119 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.239152 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.239161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.239176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.239200 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.343005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.343074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.343093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.343120 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.343137 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.446641 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.446929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.446955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.446983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.447004 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.550143 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.550187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.550197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.550216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.550232 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.653635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.654328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.654543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.654727 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.654877 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.757619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.757806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.757841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.757868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.757888 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.861155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.861208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.861218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.861237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.861253 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.965236 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.965321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.965334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.965378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:39 crc kubenswrapper[4873]: I0121 00:07:39.965392 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:39Z","lastTransitionTime":"2026-01-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.063495 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.063540 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:40 crc kubenswrapper[4873]: E0121 00:07:40.063789 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:40 crc kubenswrapper[4873]: E0121 00:07:40.064018 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.064388 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:40 crc kubenswrapper[4873]: E0121 00:07:40.064623 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.068016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.068076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.068091 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.068113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.068129 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.172081 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.172163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.172201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.172228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.172241 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.210299 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:57:35.837880507 +0000 UTC Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.278578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.278622 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.278631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.278650 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.278661 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.382333 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.382394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.382405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.382426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.382439 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.486846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.486904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.486913 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.486938 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.486953 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.590787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.590858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.590871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.590896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.590912 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.694725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.694833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.694846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.694863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.694875 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.797980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.798036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.798046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.798064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.798076 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.901489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.901543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.901618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.901642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:40 crc kubenswrapper[4873]: I0121 00:07:40.901657 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:40Z","lastTransitionTime":"2026-01-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.004805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.004856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.004869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.004888 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.004904 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.063352 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:41 crc kubenswrapper[4873]: E0121 00:07:41.063734 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.107241 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.107300 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.107312 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.107341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.107355 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.368264 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:41:26.875016037 +0000 UTC Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.370583 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.370646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.370655 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.370712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.370724 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.474508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.474651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.474724 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.474760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.474785 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.578318 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.578383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.578400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.578428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.578447 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.682322 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.682395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.682407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.682423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.682435 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.785383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.785440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.785457 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.785477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.785488 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.889436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.889496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.889515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.889539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.889592 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.992375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.992419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.992463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.992486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:41 crc kubenswrapper[4873]: I0121 00:07:41.992498 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:41Z","lastTransitionTime":"2026-01-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.063341 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:42 crc kubenswrapper[4873]: E0121 00:07:42.063465 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.063460 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.063661 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:42 crc kubenswrapper[4873]: E0121 00:07:42.063773 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:42 crc kubenswrapper[4873]: E0121 00:07:42.063831 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.095692 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.095737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.095750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.095765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.095776 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.198816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.198864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.198873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.198890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.198905 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.301199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.301257 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.301267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.301286 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.301297 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.368991 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:21:42.592828041 +0000 UTC Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.404376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.404440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.404454 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.404475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.404490 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.507778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.507841 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.507853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.507874 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.507891 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.611504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.611626 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.611643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.611675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.611687 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.715113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.715156 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.715168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.715187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.715239 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.817282 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.817329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.817341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.817359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.817373 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.925436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.925484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.925498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.925517 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:42 crc kubenswrapper[4873]: I0121 00:07:42.925529 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:42Z","lastTransitionTime":"2026-01-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.028760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.028806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.028819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.028836 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.028847 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:43Z","lastTransitionTime":"2026-01-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.063441 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:43 crc kubenswrapper[4873]: E0121 00:07:43.064068 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.132615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.133025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.133198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.133359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.133586 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:43Z","lastTransitionTime":"2026-01-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.236980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.237059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.237083 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.237106 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.237123 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:43Z","lastTransitionTime":"2026-01-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.262382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.262817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.262958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.263229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.263399 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:07:43Z","lastTransitionTime":"2026-01-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.303363 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94"] Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.304758 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.307748 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.308275 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.308440 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.308506 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.323439 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.323409452 podStartE2EDuration="27.323409452s" podCreationTimestamp="2026-01-21 00:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.322772304 +0000 UTC m=+95.562639970" watchObservedRunningTime="2026-01-21 00:07:43.323409452 +0000 UTC m=+95.563277118" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.336521 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zc72l" podStartSLOduration=76.33649496 podStartE2EDuration="1m16.33649496s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.335641225 +0000 UTC m=+95.575508871" watchObservedRunningTime="2026-01-21 00:07:43.33649496 +0000 UTC m=+95.576362606" Jan 21 00:07:43 crc 
kubenswrapper[4873]: I0121 00:07:43.368445 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nhxdr" podStartSLOduration=76.36842882 podStartE2EDuration="1m16.36842882s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.34866833 +0000 UTC m=+95.588535976" watchObservedRunningTime="2026-01-21 00:07:43.36842882 +0000 UTC m=+95.608296466" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.368595 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nfrvx" podStartSLOduration=76.368590615 podStartE2EDuration="1m16.368590615s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.368157172 +0000 UTC m=+95.608024818" watchObservedRunningTime="2026-01-21 00:07:43.368590615 +0000 UTC m=+95.608458261" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.369958 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:02:09.955080413 +0000 UTC Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.370019 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.385035 4873 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.421137 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.421183 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9d16caf-c710-4169-b406-8071b78d2ca4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.421211 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.421231 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d16caf-c710-4169-b406-8071b78d2ca4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 
00:07:43.421255 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d16caf-c710-4169-b406-8071b78d2ca4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.441661 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.44163531 podStartE2EDuration="1m12.44163531s" podCreationTimestamp="2026-01-21 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.424540438 +0000 UTC m=+95.664408084" watchObservedRunningTime="2026-01-21 00:07:43.44163531 +0000 UTC m=+95.681502966" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.480726 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rd4h7" podStartSLOduration=76.480710477 podStartE2EDuration="1m16.480710477s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.461650637 +0000 UTC m=+95.701518283" watchObservedRunningTime="2026-01-21 00:07:43.480710477 +0000 UTC m=+95.720578123" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.503073 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.503057401 podStartE2EDuration="41.503057401s" podCreationTimestamp="2026-01-21 00:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.502610237 +0000 UTC m=+95.742477903" watchObservedRunningTime="2026-01-21 00:07:43.503057401 +0000 UTC m=+95.742925047" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.522476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.522522 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9d16caf-c710-4169-b406-8071b78d2ca4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.522567 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.522585 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d16caf-c710-4169-b406-8071b78d2ca4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.522611 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d16caf-c710-4169-b406-8071b78d2ca4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.523330 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d16caf-c710-4169-b406-8071b78d2ca4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.523376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.523619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d9d16caf-c710-4169-b406-8071b78d2ca4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.532017 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d16caf-c710-4169-b406-8071b78d2ca4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.536811 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.536778883 podStartE2EDuration="1m16.536778883s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.536485365 +0000 UTC m=+95.776353031" watchObservedRunningTime="2026-01-21 00:07:43.536778883 +0000 UTC m=+95.776646529" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.557337 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9d16caf-c710-4169-b406-8071b78d2ca4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xqf94\" (UID: \"d9d16caf-c710-4169-b406-8071b78d2ca4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.594603 4873 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podStartSLOduration=76.594576739 podStartE2EDuration="1m16.594576739s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.59426832 +0000 UTC m=+95.834135976" watchObservedRunningTime="2026-01-21 00:07:43.594576739 +0000 UTC m=+95.834444385" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.628704 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.636644 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.63661586 podStartE2EDuration="1m16.63661586s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.615928894 +0000 UTC m=+95.855796540" watchObservedRunningTime="2026-01-21 00:07:43.63661586 +0000 UTC m=+95.876483506" Jan 21 00:07:43 crc kubenswrapper[4873]: W0121 00:07:43.644711 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9d16caf_c710_4169_b406_8071b78d2ca4.slice/crio-5d09d7cdaee0d7830039d6fce8f61997c5f32cf18f99bf6e179c4854d1628c79 WatchSource:0}: Error finding container 5d09d7cdaee0d7830039d6fce8f61997c5f32cf18f99bf6e179c4854d1628c79: Status 404 returned error can't find the container with id 5d09d7cdaee0d7830039d6fce8f61997c5f32cf18f99bf6e179c4854d1628c79 Jan 21 00:07:43 crc kubenswrapper[4873]: I0121 00:07:43.683604 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nvxhs" podStartSLOduration=75.683584435 podStartE2EDuration="1m15.683584435s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:43.683244074 +0000 UTC m=+95.923111720" watchObservedRunningTime="2026-01-21 00:07:43.683584435 +0000 UTC m=+95.923452081" Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.063570 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.063669 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:44 crc kubenswrapper[4873]: E0121 00:07:44.063689 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:44 crc kubenswrapper[4873]: E0121 00:07:44.063856 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.064005 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:44 crc kubenswrapper[4873]: E0121 00:07:44.064207 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.616490 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" event={"ID":"d9d16caf-c710-4169-b406-8071b78d2ca4","Type":"ContainerStarted","Data":"4d1d87d18bc1231596f1619e5ff38ebfe6989338dc9f27ac8f647a7efe3c5a35"} Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.616597 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" event={"ID":"d9d16caf-c710-4169-b406-8071b78d2ca4","Type":"ContainerStarted","Data":"5d09d7cdaee0d7830039d6fce8f61997c5f32cf18f99bf6e179c4854d1628c79"} Jan 21 00:07:44 crc kubenswrapper[4873]: I0121 00:07:44.634775 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xqf94" podStartSLOduration=77.634743562 podStartE2EDuration="1m17.634743562s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:07:44.633572168 +0000 UTC m=+96.873439824" watchObservedRunningTime="2026-01-21 00:07:44.634743562 +0000 UTC m=+96.874611238" Jan 21 00:07:45 crc kubenswrapper[4873]: I0121 00:07:45.063086 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:45 crc kubenswrapper[4873]: E0121 00:07:45.063276 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:46 crc kubenswrapper[4873]: I0121 00:07:46.063331 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:46 crc kubenswrapper[4873]: E0121 00:07:46.063537 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:46 crc kubenswrapper[4873]: I0121 00:07:46.063362 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:46 crc kubenswrapper[4873]: E0121 00:07:46.063654 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:46 crc kubenswrapper[4873]: I0121 00:07:46.063342 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:46 crc kubenswrapper[4873]: E0121 00:07:46.063713 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:46 crc kubenswrapper[4873]: I0121 00:07:46.761085 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:46 crc kubenswrapper[4873]: E0121 00:07:46.761354 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:07:46 crc kubenswrapper[4873]: E0121 00:07:46.761524 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs podName:c7f7e62f-ce78-4588-994f-8ab17d7821d1 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:50.761478286 +0000 UTC m=+163.001345962 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs") pod "network-metrics-daemon-mx2js" (UID: "c7f7e62f-ce78-4588-994f-8ab17d7821d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:07:47 crc kubenswrapper[4873]: I0121 00:07:47.063405 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:47 crc kubenswrapper[4873]: I0121 00:07:47.064528 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:07:47 crc kubenswrapper[4873]: E0121 00:07:47.064537 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:47 crc kubenswrapper[4873]: E0121 00:07:47.064833 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:07:48 crc kubenswrapper[4873]: I0121 00:07:48.062845 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:48 crc kubenswrapper[4873]: I0121 00:07:48.062890 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:48 crc kubenswrapper[4873]: I0121 00:07:48.062861 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:48 crc kubenswrapper[4873]: E0121 00:07:48.064141 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:48 crc kubenswrapper[4873]: E0121 00:07:48.064193 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:48 crc kubenswrapper[4873]: E0121 00:07:48.064248 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:49 crc kubenswrapper[4873]: I0121 00:07:49.062692 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:49 crc kubenswrapper[4873]: E0121 00:07:49.063366 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:50 crc kubenswrapper[4873]: I0121 00:07:50.063355 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:50 crc kubenswrapper[4873]: I0121 00:07:50.063607 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:50 crc kubenswrapper[4873]: E0121 00:07:50.063628 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:50 crc kubenswrapper[4873]: I0121 00:07:50.063725 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:50 crc kubenswrapper[4873]: E0121 00:07:50.064040 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:50 crc kubenswrapper[4873]: E0121 00:07:50.064175 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:51 crc kubenswrapper[4873]: I0121 00:07:51.063424 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:51 crc kubenswrapper[4873]: E0121 00:07:51.063929 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:52 crc kubenswrapper[4873]: I0121 00:07:52.062873 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:52 crc kubenswrapper[4873]: I0121 00:07:52.062907 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:52 crc kubenswrapper[4873]: I0121 00:07:52.062993 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:52 crc kubenswrapper[4873]: E0121 00:07:52.063179 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:52 crc kubenswrapper[4873]: E0121 00:07:52.063370 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:52 crc kubenswrapper[4873]: E0121 00:07:52.063517 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:53 crc kubenswrapper[4873]: I0121 00:07:53.063099 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:53 crc kubenswrapper[4873]: E0121 00:07:53.063277 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:54 crc kubenswrapper[4873]: I0121 00:07:54.063324 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:54 crc kubenswrapper[4873]: I0121 00:07:54.063329 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:54 crc kubenswrapper[4873]: I0121 00:07:54.063490 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:54 crc kubenswrapper[4873]: E0121 00:07:54.063773 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:54 crc kubenswrapper[4873]: E0121 00:07:54.063860 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:54 crc kubenswrapper[4873]: E0121 00:07:54.063673 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:55 crc kubenswrapper[4873]: I0121 00:07:55.062513 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:55 crc kubenswrapper[4873]: E0121 00:07:55.062785 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:56 crc kubenswrapper[4873]: I0121 00:07:56.062585 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:56 crc kubenswrapper[4873]: I0121 00:07:56.062695 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:56 crc kubenswrapper[4873]: E0121 00:07:56.062792 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:56 crc kubenswrapper[4873]: E0121 00:07:56.062925 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:56 crc kubenswrapper[4873]: I0121 00:07:56.063050 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:56 crc kubenswrapper[4873]: E0121 00:07:56.063202 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:57 crc kubenswrapper[4873]: I0121 00:07:57.063622 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:57 crc kubenswrapper[4873]: E0121 00:07:57.063867 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:07:58 crc kubenswrapper[4873]: I0121 00:07:58.062517 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:07:58 crc kubenswrapper[4873]: I0121 00:07:58.062517 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:07:58 crc kubenswrapper[4873]: I0121 00:07:58.062688 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:07:58 crc kubenswrapper[4873]: E0121 00:07:58.064901 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:07:58 crc kubenswrapper[4873]: E0121 00:07:58.065051 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:07:58 crc kubenswrapper[4873]: E0121 00:07:58.065317 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:07:59 crc kubenswrapper[4873]: I0121 00:07:59.063483 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:07:59 crc kubenswrapper[4873]: E0121 00:07:59.063994 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:00 crc kubenswrapper[4873]: I0121 00:08:00.062960 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:00 crc kubenswrapper[4873]: I0121 00:08:00.062974 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:00 crc kubenswrapper[4873]: I0121 00:08:00.063132 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:00 crc kubenswrapper[4873]: E0121 00:08:00.063359 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:00 crc kubenswrapper[4873]: E0121 00:08:00.063645 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:00 crc kubenswrapper[4873]: E0121 00:08:00.063777 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.062455 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:01 crc kubenswrapper[4873]: E0121 00:08:01.062729 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.063826 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:08:01 crc kubenswrapper[4873]: E0121 00:08:01.064207 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbp72_openshift-ovn-kubernetes(12879027-cbf4-4393-a71e-2a42d8c9f0fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.680135 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/1.log" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.681274 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/0.log" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.681336 4873 generic.go:334] "Generic (PLEG): container finished" podID="fc2b4503-97f2-44cb-a1ad-e558df352294" containerID="739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6" exitCode=1 Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.681404 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerDied","Data":"739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6"} Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.681457 4873 scope.go:117] "RemoveContainer" containerID="4b93bbb8e11dc1f3b44b81e1441c62f6890773a9b7cf677d71fa2b9306750ba5" Jan 21 00:08:01 crc kubenswrapper[4873]: I0121 00:08:01.682234 4873 scope.go:117] "RemoveContainer" containerID="739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6" Jan 21 00:08:01 crc kubenswrapper[4873]: E0121 00:08:01.682526 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nfrvx_openshift-multus(fc2b4503-97f2-44cb-a1ad-e558df352294)\"" pod="openshift-multus/multus-nfrvx" podUID="fc2b4503-97f2-44cb-a1ad-e558df352294" Jan 21 00:08:02 crc kubenswrapper[4873]: I0121 00:08:02.063159 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:02 crc kubenswrapper[4873]: I0121 00:08:02.063159 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:02 crc kubenswrapper[4873]: I0121 00:08:02.063284 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:02 crc kubenswrapper[4873]: E0121 00:08:02.063357 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:02 crc kubenswrapper[4873]: E0121 00:08:02.063478 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:02 crc kubenswrapper[4873]: E0121 00:08:02.063652 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:02 crc kubenswrapper[4873]: I0121 00:08:02.689151 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/1.log" Jan 21 00:08:03 crc kubenswrapper[4873]: I0121 00:08:03.062862 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:03 crc kubenswrapper[4873]: E0121 00:08:03.063011 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:04 crc kubenswrapper[4873]: I0121 00:08:04.062595 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:04 crc kubenswrapper[4873]: I0121 00:08:04.062722 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:04 crc kubenswrapper[4873]: I0121 00:08:04.062626 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:04 crc kubenswrapper[4873]: E0121 00:08:04.062980 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:04 crc kubenswrapper[4873]: E0121 00:08:04.062844 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:04 crc kubenswrapper[4873]: E0121 00:08:04.063124 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:05 crc kubenswrapper[4873]: I0121 00:08:05.062522 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:05 crc kubenswrapper[4873]: E0121 00:08:05.063224 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:06 crc kubenswrapper[4873]: I0121 00:08:06.063625 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:06 crc kubenswrapper[4873]: I0121 00:08:06.063698 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:06 crc kubenswrapper[4873]: I0121 00:08:06.063722 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:06 crc kubenswrapper[4873]: E0121 00:08:06.063839 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:06 crc kubenswrapper[4873]: E0121 00:08:06.063958 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:06 crc kubenswrapper[4873]: E0121 00:08:06.064101 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:07 crc kubenswrapper[4873]: I0121 00:08:07.063048 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:07 crc kubenswrapper[4873]: E0121 00:08:07.063266 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:08 crc kubenswrapper[4873]: I0121 00:08:08.062864 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:08 crc kubenswrapper[4873]: I0121 00:08:08.064147 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:08 crc kubenswrapper[4873]: E0121 00:08:08.068448 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:08 crc kubenswrapper[4873]: I0121 00:08:08.068602 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:08 crc kubenswrapper[4873]: E0121 00:08:08.069460 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:08 crc kubenswrapper[4873]: E0121 00:08:08.069587 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:08 crc kubenswrapper[4873]: E0121 00:08:08.094131 4873 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 00:08:08 crc kubenswrapper[4873]: E0121 00:08:08.161473 4873 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 00:08:09 crc kubenswrapper[4873]: I0121 00:08:09.062638 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:09 crc kubenswrapper[4873]: E0121 00:08:09.063174 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:10 crc kubenswrapper[4873]: I0121 00:08:10.063271 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:10 crc kubenswrapper[4873]: I0121 00:08:10.063346 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:10 crc kubenswrapper[4873]: E0121 00:08:10.063402 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:10 crc kubenswrapper[4873]: E0121 00:08:10.063533 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:10 crc kubenswrapper[4873]: I0121 00:08:10.064683 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:10 crc kubenswrapper[4873]: E0121 00:08:10.064830 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:11 crc kubenswrapper[4873]: I0121 00:08:11.062857 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:11 crc kubenswrapper[4873]: E0121 00:08:11.063112 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:12 crc kubenswrapper[4873]: I0121 00:08:12.063410 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:12 crc kubenswrapper[4873]: I0121 00:08:12.063459 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:12 crc kubenswrapper[4873]: E0121 00:08:12.063594 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:12 crc kubenswrapper[4873]: E0121 00:08:12.063720 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:12 crc kubenswrapper[4873]: I0121 00:08:12.063816 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:12 crc kubenswrapper[4873]: E0121 00:08:12.063914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.062980 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.063941 4873 scope.go:117] "RemoveContainer" containerID="739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6" Jan 21 00:08:13 crc kubenswrapper[4873]: E0121 00:08:13.064674 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.065421 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:08:13 crc kubenswrapper[4873]: E0121 00:08:13.162327 4873 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.730127 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/1.log" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.730206 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerStarted","Data":"c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9"} Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.754612 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/3.log" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.757512 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerStarted","Data":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.758176 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:08:13 crc kubenswrapper[4873]: I0121 00:08:13.820915 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podStartSLOduration=106.82089936 podStartE2EDuration="1m46.82089936s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:13.820266675 +0000 UTC m=+126.060134341" watchObservedRunningTime="2026-01-21 00:08:13.82089936 +0000 UTC m=+126.060767006" Jan 21 00:08:14 crc kubenswrapper[4873]: I0121 00:08:14.045621 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mx2js"] Jan 21 00:08:14 crc kubenswrapper[4873]: I0121 00:08:14.045715 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:14 crc kubenswrapper[4873]: E0121 00:08:14.045801 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:14 crc kubenswrapper[4873]: I0121 00:08:14.062814 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:14 crc kubenswrapper[4873]: I0121 00:08:14.062900 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:14 crc kubenswrapper[4873]: I0121 00:08:14.062971 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:14 crc kubenswrapper[4873]: E0121 00:08:14.062962 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:14 crc kubenswrapper[4873]: E0121 00:08:14.063052 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:14 crc kubenswrapper[4873]: E0121 00:08:14.063326 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:16 crc kubenswrapper[4873]: I0121 00:08:16.062831 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:16 crc kubenswrapper[4873]: I0121 00:08:16.062943 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:16 crc kubenswrapper[4873]: I0121 00:08:16.062989 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:16 crc kubenswrapper[4873]: E0121 00:08:16.063163 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:16 crc kubenswrapper[4873]: I0121 00:08:16.063243 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:16 crc kubenswrapper[4873]: E0121 00:08:16.063364 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:16 crc kubenswrapper[4873]: E0121 00:08:16.063433 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:16 crc kubenswrapper[4873]: E0121 00:08:16.063575 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:18 crc kubenswrapper[4873]: I0121 00:08:18.065431 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:18 crc kubenswrapper[4873]: E0121 00:08:18.066777 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx2js" podUID="c7f7e62f-ce78-4588-994f-8ab17d7821d1" Jan 21 00:08:18 crc kubenswrapper[4873]: I0121 00:08:18.067062 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:18 crc kubenswrapper[4873]: I0121 00:08:18.067143 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:18 crc kubenswrapper[4873]: E0121 00:08:18.067234 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 00:08:18 crc kubenswrapper[4873]: I0121 00:08:18.067419 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:18 crc kubenswrapper[4873]: E0121 00:08:18.067516 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 00:08:18 crc kubenswrapper[4873]: E0121 00:08:18.067773 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.062507 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.062672 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.062906 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.062941 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.065903 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.066023 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.066414 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.066429 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.066984 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 00:08:20 crc kubenswrapper[4873]: I0121 00:08:20.067835 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 00:08:23 crc kubenswrapper[4873]: I0121 00:08:23.979443 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.017436 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7fpv"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.018046 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.033327 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.033864 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.034263 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.034483 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.034726 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.033344 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.036057 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.036299 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.036824 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hk8w"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.037003 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.037831 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.039357 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lh79l"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.039794 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.045472 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.046117 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.046644 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.046692 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.046863 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.052665 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.052783 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.052924 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053430 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053457 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053463 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053864 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29482560-5mcvl"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053938 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053515 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053647 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.053770 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.054289 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.057619 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r2mwh"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.058412 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.061836 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cznck"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.062602 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.063389 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.063465 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.063619 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.063640 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.063756 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.073321 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.073714 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.075706 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.078475 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.078996 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.079493 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.080139 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.080537 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.080700 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 
00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.085603 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.092706 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.092726 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.094802 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107602 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-image-import-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107626 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-encryption-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107652 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-serving-cert\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107774 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107803 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit-dir\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107845 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-node-pullsecrets\") pod 
\"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107868 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-client\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107899 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107941 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxp4\" (UniqueName: \"kubernetes.io/projected/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-kube-api-access-kdxp4\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.107971 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.109264 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.109582 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.109977 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9gjlf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.110250 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.112351 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.113013 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zffj8"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.113272 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.113318 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.115408 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqq6z"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.115508 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.115580 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.115858 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.116185 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.116261 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.116282 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.116500 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.116707 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.115523 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.117208 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.123659 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.124753 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.126587 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.126934 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.126962 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.127219 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.127836 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128065 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128416 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128532 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128684 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128910 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.128989 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129006 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129061 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129113 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129245 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129374 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.129757 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.130023 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.130101 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.130695 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.131364 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.133514 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.134801 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.135188 4873 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.135349 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.139141 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.139739 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.140290 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.140715 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.141998 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.142870 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.147221 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.149691 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.162324 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.162789 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.162911 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163048 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163393 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163651 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163915 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163948 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.163977 4873 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.164077 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.164176 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.164212 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.165193 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.165576 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.166062 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.166379 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.166917 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bhsdr"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.167189 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.167639 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.167965 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.168585 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.171185 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.171964 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.176538 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.178254 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.180706 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jbpj7"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.182282 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.182317 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.182469 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.182587 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.183982 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.194571 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.196416 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-prftc"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.198877 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.200323 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.200856 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.201226 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.199960 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.201624 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.201823 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.201866 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.202355 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-s5hrf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.202335 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.202985 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.203192 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.203526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.203932 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.204427 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.204598 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.204701 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.204729 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.205474 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.208893 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4d6hg"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209583 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209602 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209658 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-client\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209682 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-service-ca\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209733 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-metrics-tls\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209769 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-config\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209809 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-oauth-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209833 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320d128-0e99-479c-bb4e-edf54938ea82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209850 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgv4r\" (UniqueName: \"kubernetes.io/projected/4bdecaf8-534a-44fb-ad7a-224aacfc573b-kube-api-access-mgv4r\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209892 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxp4\" (UniqueName: \"kubernetes.io/projected/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-kube-api-access-kdxp4\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209910 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml86k\" (UniqueName: \"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-kube-api-access-ml86k\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209930 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.209965 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5e6054-8bdc-4431-914c-ba885604a20b-serving-cert\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212000 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212175 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt685\" (UniqueName: \"kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212243 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-config\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212305 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-images\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212327 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm658\" (UniqueName: \"kubernetes.io/projected/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-kube-api-access-sm658\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212348 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4358e352-a966-4c5c-9f6e-fb7f11616026-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212388 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-auth-proxy-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212409 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212431 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-encryption-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212470 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-serving-cert\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212491 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6krt\" (UniqueName: 
\"kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt\") pod \"image-pruner-29482560-5mcvl\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212637 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4zm\" (UniqueName: \"kubernetes.io/projected/0a5e6054-8bdc-4431-914c-ba885604a20b-kube-api-access-qk4zm\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212702 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-config\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212724 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn7t\" (UniqueName: \"kubernetes.io/projected/9250698c-3404-4a66-a9b6-286266f0e829-kube-api-access-2dn7t\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212779 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212800 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jjz\" (UniqueName: \"kubernetes.io/projected/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-kube-api-access-z9jjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212824 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit-dir\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212863 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdecaf8-534a-44fb-ad7a-224aacfc573b-serving-cert\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212884 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212905 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-node-pullsecrets\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.212967 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr684\" (UniqueName: \"kubernetes.io/projected/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-kube-api-access-lr684\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213020 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxbr\" (UniqueName: \"kubernetes.io/projected/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-kube-api-access-msxbr\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9250698c-3404-4a66-a9b6-286266f0e829-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213122 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4358e352-a966-4c5c-9f6e-fb7f11616026-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213147 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213200 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213224 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5320d128-0e99-479c-bb4e-edf54938ea82-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213291 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbjr\" (UniqueName: \"kubernetes.io/projected/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-kube-api-access-xbbjr\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213360 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213390 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213449 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213484 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a5e6054-8bdc-4431-914c-ba885604a20b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213535 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwq7\" (UniqueName: 
\"kubernetes.io/projected/110863cb-5af1-4e46-9e40-9831dfa25875-kube-api-access-6hwq7\") pod \"downloads-7954f5f757-9gjlf\" (UID: \"110863cb-5af1-4e46-9e40-9831dfa25875\") " pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213599 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-trusted-ca\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213624 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-trusted-ca-bundle\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213623 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213676 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-machine-approver-tls\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213716 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-audit-dir\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213750 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-image-import-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.213945 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-serving-cert\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214012 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-service-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214042 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214238 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c999d\" (UniqueName: \"kubernetes.io/projected/4e56ba3a-006a-4ae3-ae87-5c9babf78867-kube-api-access-c999d\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214265 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214286 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214356 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lflq2\" (UniqueName: \"kubernetes.io/projected/5320d128-0e99-479c-bb4e-edf54938ea82-kube-api-access-lflq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214479 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-oauth-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-node-pullsecrets\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.214617 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca\") pod \"image-pruner-29482560-5mcvl\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.215010 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.215198 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.215527 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-image-import-ca\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.216154 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.216769 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.218420 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-encryption-config\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.218485 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.219224 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.220213 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.220351 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.220829 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.223312 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.223986 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.224059 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.224855 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.224929 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-serving-cert\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.224934 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.225414 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.225678 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.226013 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-etcd-client\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.226171 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.226986 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mx25s"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.227887 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.228099 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7fpv"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.230134 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.230527 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.232913 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lh79l"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.234258 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hk8w"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.235584 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r2mwh"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.236761 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9gjlf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.237969 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.239034 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zffj8"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.240260 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.251936 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.254340 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.255482 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.256507 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29482560-5mcvl"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.257524 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qqg7r"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.258126 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.258507 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9d5lf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.259233 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.259646 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bhsdr"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.260310 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.261247 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqq6z"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.262348 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.263738 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.265020 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.266218 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.268494 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.269922 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.286087 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.294102 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.297468 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4d6hg"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.297540 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.297654 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.299504 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.300487 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jbpj7"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.301670 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9d5lf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.303389 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 
00:08:24.304924 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.304966 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.306080 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.307180 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.308263 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.309308 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.311875 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-prftc"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.311905 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.312879 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.314958 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.314989 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mx25s"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315502 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315536 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315579 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315603 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wwf\" (UniqueName: \"kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315636 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320d128-0e99-479c-bb4e-edf54938ea82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315663 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-srv-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315685 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d659819a-df3b-470c-822e-45864643cffa-tmpfs\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315710 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.315741 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.316134 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml86k\" (UniqueName: \"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-kube-api-access-ml86k\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.316281 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.316681 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320d128-0e99-479c-bb4e-edf54938ea82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317345 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317388 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt685\" (UniqueName: \"kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317410 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317435 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-images\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317457 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317484 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk4zm\" (UniqueName: \"kubernetes.io/projected/0a5e6054-8bdc-4431-914c-ba885604a20b-kube-api-access-qk4zm\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317503 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317528 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-config\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317564 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbkpj\" (UniqueName: \"kubernetes.io/projected/ad45b560-df33-48b7-ad41-f2b562b6b682-kube-api-access-wbkpj\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317587 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dn7t\" (UniqueName: \"kubernetes.io/projected/9250698c-3404-4a66-a9b6-286266f0e829-kube-api-access-2dn7t\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317609 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9676863-bb22-4886-bed1-f22b6aa37f90-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317629 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-config\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317712 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9qf\" (UniqueName: \"kubernetes.io/projected/a87004d0-0618-4970-93d8-14326161f16a-kube-api-access-sr9qf\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317785 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxxt\" (UniqueName: \"kubernetes.io/projected/484fd929-7fa0-4ab0-95d0-fbbb122c255a-kube-api-access-srxxt\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317827 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317857 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lglfc\" (UniqueName: \"kubernetes.io/projected/98561363-dd26-4b91-86a3-61f13886ce2c-kube-api-access-lglfc\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: 
\"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317887 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdecaf8-534a-44fb-ad7a-224aacfc573b-serving-cert\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317913 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-default-certificate\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317934 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnst\" (UniqueName: \"kubernetes.io/projected/12926866-b4b6-4c68-b010-e30359824005-kube-api-access-9pnst\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317955 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxbr\" (UniqueName: \"kubernetes.io/projected/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-kube-api-access-msxbr\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.317973 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-client\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318039 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9250698c-3404-4a66-a9b6-286266f0e829-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318078 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4358e352-a966-4c5c-9f6e-fb7f11616026-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318255 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvb7h\" (UniqueName: \"kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318296 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntlwk\" (UniqueName: \"kubernetes.io/projected/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-kube-api-access-ntlwk\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318327 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5320d128-0e99-479c-bb4e-edf54938ea82-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318386 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbjr\" (UniqueName: \"kubernetes.io/projected/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-kube-api-access-xbbjr\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318382 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-images\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318417 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318443 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318469 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-srv-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: 
\"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318497 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hwq7\" (UniqueName: \"kubernetes.io/projected/110863cb-5af1-4e46-9e40-9831dfa25875-kube-api-access-6hwq7\") pod \"downloads-7954f5f757-9gjlf\" (UID: \"110863cb-5af1-4e46-9e40-9831dfa25875\") " pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318354 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-config\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318527 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318577 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-machine-approver-tls\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318603 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-stats-auth\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318669 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdd8df50-7130-49f2-b68f-6cb7c95a45de-metrics-tls\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318709 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c999d\" (UniqueName: \"kubernetes.io/projected/4e56ba3a-006a-4ae3-ae87-5c9babf78867-kube-api-access-c999d\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318732 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lflq2\" (UniqueName: \"kubernetes.io/projected/5320d128-0e99-479c-bb4e-edf54938ea82-kube-api-access-lflq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc 
kubenswrapper[4873]: I0121 00:08:24.318756 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-service-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318784 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318811 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbl99\" (UniqueName: \"kubernetes.io/projected/aeb6b18b-cc94-4a8a-a970-e39447726765-kube-api-access-xbl99\") pod \"migrator-59844c95c7-r9ljc\" (UID: \"aeb6b18b-cc94-4a8a-a970-e39447726765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318838 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12926866-b4b6-4c68-b010-e30359824005-service-ca-bundle\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318863 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-oauth-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318890 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-metrics-tls\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318915 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-config\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.319631 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-service-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.319858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.318940 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320191 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-oauth-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320211 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320232 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320255 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320363 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgv4r\" (UniqueName: \"kubernetes.io/projected/4bdecaf8-534a-44fb-ad7a-224aacfc573b-kube-api-access-mgv4r\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320396 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5e6054-8bdc-4431-914c-ba885604a20b-serving-cert\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320424 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320451 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320476 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320498 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-config\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320519 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14771d68-d459-4b1c-88a4-4a83a3141db4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320542 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320586 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-auth-proxy-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320852 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q2stp"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.320978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-oauth-serving-cert\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321682 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm658\" (UniqueName: \"kubernetes.io/projected/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-kube-api-access-sm658\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321785 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4358e352-a966-4c5c-9f6e-fb7f11616026-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321823 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321852 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321886 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6krt\" (UniqueName: \"kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt\") pod \"image-pruner-29482560-5mcvl\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321926 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321960 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.321992 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config\") pod \"route-controller-manager-6576b87f9c-nrngb\" 
(UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322022 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-service-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322048 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322074 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jjz\" (UniqueName: \"kubernetes.io/projected/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-kube-api-access-z9jjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322101 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd8df50-7130-49f2-b68f-6cb7c95a45de-trusted-ca\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322130 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14771d68-d459-4b1c-88a4-4a83a3141db4-config\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322152 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322177 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322207 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr684\" (UniqueName: \"kubernetes.io/projected/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-kube-api-access-lr684\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322243 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62c1a236-f69b-401a-9340-c57b301f0657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322286 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqx4\" (UniqueName: \"kubernetes.io/projected/d9676863-bb22-4886-bed1-f22b6aa37f90-kube-api-access-ptqx4\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322314 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322348 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322585 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9250698c-3404-4a66-a9b6-286266f0e829-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322644 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4358e352-a966-4c5c-9f6e-fb7f11616026-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322673 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322720 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvzt\" (UniqueName: 
\"kubernetes.io/projected/d6947adf-bb59-4009-b65e-76b873d5fb18-kube-api-access-ftvzt\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322751 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322786 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lpv7\" (UniqueName: \"kubernetes.io/projected/8837b547-757d-45f5-b85b-d800484f3d07-kube-api-access-7lpv7\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322825 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322865 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322897 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322928 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqsg\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-kube-api-access-cwqsg\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322961 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322988 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9250698c-3404-4a66-a9b6-286266f0e829-config\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.322997 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323070 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a5e6054-8bdc-4431-914c-ba885604a20b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323108 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323240 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-trusted-ca\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323276 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-trusted-ca-bundle\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323307 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-metrics-certs\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323348 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-serving-cert\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323380 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6947adf-bb59-4009-b65e-76b873d5fb18-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323409 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thkc9\" (UniqueName: \"kubernetes.io/projected/62c1a236-f69b-401a-9340-c57b301f0657-kube-api-access-thkc9\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323439 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-webhook-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323544 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7hn\" (UniqueName: \"kubernetes.io/projected/632798b6-480c-42d9-a549-7b2e8a87b1e2-kube-api-access-hw7hn\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323589 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-config\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323621 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca\") pod \"image-pruner-29482560-5mcvl\" (UID: 
\"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323707 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14771d68-d459-4b1c-88a4-4a83a3141db4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323752 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.323799 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98561363-dd26-4b91-86a3-61f13886ce2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: \"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.324016 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6947adf-bb59-4009-b65e-76b873d5fb18-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.324064 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsnq\" (UniqueName: \"kubernetes.io/projected/d659819a-df3b-470c-822e-45864643cffa-kube-api-access-hxsnq\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.324183 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-service-ca\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.324245 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-serving-cert\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.324396 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca\") pod \"image-pruner-29482560-5mcvl\" (UID: 
\"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.325107 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.325689 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.325727 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.325935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4358e352-a966-4c5c-9f6e-fb7f11616026-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.326064 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a5e6054-8bdc-4431-914c-ba885604a20b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.326274 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5e6054-8bdc-4431-914c-ba885604a20b-serving-cert\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.326816 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-service-ca\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.327413 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-metrics-tls\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.327580 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.327699 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-auth-proxy-config\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.328195 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5320d128-0e99-479c-bb4e-edf54938ea82-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.328471 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bdecaf8-534a-44fb-ad7a-224aacfc573b-trusted-ca\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.328478 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdecaf8-534a-44fb-ad7a-224aacfc573b-serving-cert\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.329098 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.329293 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-72s9c"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.329412 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.330858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e56ba3a-006a-4ae3-ae87-5c9babf78867-trusted-ca-bundle\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.332273 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-serving-cert\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.332345 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q2stp"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.332369 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-72s9c"] Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.332466 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.332807 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.334119 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-machine-approver-tls\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.334230 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4e56ba3a-006a-4ae3-ae87-5c9babf78867-console-oauth-config\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.336131 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.340034 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.360904 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.387279 4873 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.400121 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.421126 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425311 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqx4\" (UniqueName: \"kubernetes.io/projected/d9676863-bb22-4886-bed1-f22b6aa37f90-kube-api-access-ptqx4\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425342 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425362 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425383 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425400 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lpv7\" (UniqueName: \"kubernetes.io/projected/8837b547-757d-45f5-b85b-d800484f3d07-kube-api-access-7lpv7\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425418 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvzt\" (UniqueName: \"kubernetes.io/projected/d6947adf-bb59-4009-b65e-76b873d5fb18-kube-api-access-ftvzt\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425433 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425455 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425472 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqsg\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-kube-api-access-cwqsg\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425490 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425538 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425591 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-metrics-certs\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425623 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6947adf-bb59-4009-b65e-76b873d5fb18-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425645 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425669 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thkc9\" (UniqueName: \"kubernetes.io/projected/62c1a236-f69b-401a-9340-c57b301f0657-kube-api-access-thkc9\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc 
kubenswrapper[4873]: I0121 00:08:24.425697 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425722 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-webhook-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425749 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7hn\" (UniqueName: \"kubernetes.io/projected/632798b6-480c-42d9-a549-7b2e8a87b1e2-kube-api-access-hw7hn\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425776 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14771d68-d459-4b1c-88a4-4a83a3141db4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425802 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6947adf-bb59-4009-b65e-76b873d5fb18-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425830 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsnq\" (UniqueName: \"kubernetes.io/projected/d659819a-df3b-470c-822e-45864643cffa-kube-api-access-hxsnq\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425857 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98561363-dd26-4b91-86a3-61f13886ce2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: \"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425883 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-serving-cert\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc 
kubenswrapper[4873]: I0121 00:08:24.425922 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425948 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.425974 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426002 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wwf\" (UniqueName: \"kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426029 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d659819a-df3b-470c-822e-45864643cffa-tmpfs\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426056 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-srv-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426082 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426107 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426140 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426185 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426213 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbkpj\" (UniqueName: \"kubernetes.io/projected/ad45b560-df33-48b7-ad41-f2b562b6b682-kube-api-access-wbkpj\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426243 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-config\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426265 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9qf\" (UniqueName: \"kubernetes.io/projected/a87004d0-0618-4970-93d8-14326161f16a-kube-api-access-sr9qf\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426288 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9676863-bb22-4886-bed1-f22b6aa37f90-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426311 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lglfc\" (UniqueName: \"kubernetes.io/projected/98561363-dd26-4b91-86a3-61f13886ce2c-kube-api-access-lglfc\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: \"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426333 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxxt\" (UniqueName: \"kubernetes.io/projected/484fd929-7fa0-4ab0-95d0-fbbb122c255a-kube-api-access-srxxt\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426383 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnst\" (UniqueName: \"kubernetes.io/projected/12926866-b4b6-4c68-b010-e30359824005-kube-api-access-9pnst\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426407 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-default-certificate\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426421 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d659819a-df3b-470c-822e-45864643cffa-tmpfs\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426439 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-client\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426487 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426513 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvb7h\" (UniqueName: \"kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426542 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlwk\" (UniqueName: \"kubernetes.io/projected/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-kube-api-access-ntlwk\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426612 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: 
I0121 00:08:24.426637 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426668 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426692 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-srv-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426719 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-stats-auth\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426745 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdd8df50-7130-49f2-b68f-6cb7c95a45de-metrics-tls\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426796 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbl99\" (UniqueName: \"kubernetes.io/projected/aeb6b18b-cc94-4a8a-a970-e39447726765-kube-api-access-xbl99\") pod \"migrator-59844c95c7-r9ljc\" (UID: \"aeb6b18b-cc94-4a8a-a970-e39447726765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426824 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12926866-b4b6-4c68-b010-e30359824005-service-ca-bundle\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426887 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426912 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426936 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426963 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.426986 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427034 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427057 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14771d68-d459-4b1c-88a4-4a83a3141db4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427094 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427128 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427153 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427186 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd8df50-7130-49f2-b68f-6cb7c95a45de-trusted-ca\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427197 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427213 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-service-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427237 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14771d68-d459-4b1c-88a4-4a83a3141db4-config\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427259 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427284 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62c1a236-f69b-401a-9340-c57b301f0657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.427700 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-config\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc 
kubenswrapper[4873]: I0121 00:08:24.427056 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.428220 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.428363 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9676863-bb22-4886-bed1-f22b6aa37f90-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.428534 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.428705 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.429166 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-service-ca\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.429389 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdd8df50-7130-49f2-b68f-6cb7c95a45de-trusted-ca\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.430354 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62c1a236-f69b-401a-9340-c57b301f0657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.431152 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.431612 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdd8df50-7130-49f2-b68f-6cb7c95a45de-metrics-tls\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.431741 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-srv-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.431950 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ad45b560-df33-48b7-ad41-f2b562b6b682-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.432343 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-serving-cert\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.432662 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.433373 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/484fd929-7fa0-4ab0-95d0-fbbb122c255a-etcd-client\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.433488 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/98561363-dd26-4b91-86a3-61f13886ce2c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: \"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.433591 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 
21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.435842 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8837b547-757d-45f5-b85b-d800484f3d07-srv-cert\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.438722 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.441051 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.461013 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.480927 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.500638 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.511220 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.521499 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.530980 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.540454 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.549632 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.561476 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.569091 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-webhook-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.569708 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d659819a-df3b-470c-822e-45864643cffa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.581159 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.600075 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.613619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-default-certificate\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.620373 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.640911 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.650372 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12926866-b4b6-4c68-b010-e30359824005-service-ca-bundle\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.662337 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.673655 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-stats-auth\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.681632 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.692215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12926866-b4b6-4c68-b010-e30359824005-metrics-certs\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.701737 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.741520 4873 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.749591 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.760854 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.770683 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.781017 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.791026 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.820304 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.820970 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.831875 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.832185 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.841184 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.847251 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies\") pod 
\"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.860529 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.880428 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.889737 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.910254 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.920345 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.925018 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.930485 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.940042 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.947594 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6947adf-bb59-4009-b65e-76b873d5fb18-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.960477 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.981285 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 00:08:24 crc kubenswrapper[4873]: I0121 00:08:24.990751 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6947adf-bb59-4009-b65e-76b873d5fb18-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.000842 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.021664 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.040650 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.061798 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.080460 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.101219 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.121423 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.161198 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.167230 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxp4\" (UniqueName: \"kubernetes.io/projected/6d66f48c-9b12-4b28-aecf-455bc79f6ff0-kube-api-access-kdxp4\") pod \"apiserver-76f77b778f-n7fpv\" (UID: \"6d66f48c-9b12-4b28-aecf-455bc79f6ff0\") " pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.180525 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.201432 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.212011 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14771d68-d459-4b1c-88a4-4a83a3141db4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.218938 4873 request.go:700] Waited for 1.00326449s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.220780 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.230174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14771d68-d459-4b1c-88a4-4a83a3141db4-config\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.237041 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.240213 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.261940 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.281239 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.300529 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.320331 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.340168 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.360539 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.380732 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.400373 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.420386 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.421953 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7fpv"] Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.426485 4873 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.426571 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls podName:ce32fab5-629a-4cb5-b5a7-c91b113ae67d nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.92653418 +0000 UTC m=+138.166401826 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls") pod "machine-config-operator-74547568cd-pf2qm" (UID: "ce32fab5-629a-4cb5-b5a7-c91b113ae67d") : failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.426625 4873 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.426671 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs podName:632798b6-480c-42d9-a549-7b2e8a87b1e2 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.926657444 +0000 UTC m=+138.166525090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs") pod "multus-admission-controller-857f4d67dd-mx25s" (UID: "632798b6-480c-42d9-a549-7b2e8a87b1e2") : failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.427732 4873 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.427802 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls podName:d9676863-bb22-4886-bed1-f22b6aa37f90 nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.927780917 +0000 UTC m=+138.167648563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls") pod "machine-config-controller-84d6567774-4hj68" (UID: "d9676863-bb22-4886-bed1-f22b6aa37f90") : failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428726 4873 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428794 4873 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428819 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config podName:a87004d0-0618-4970-93d8-14326161f16a nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.928792017 +0000 UTC m=+138.168659683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config") pod "service-ca-operator-777779d784-p2cz2" (UID: "a87004d0-0618-4970-93d8-14326161f16a") : failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428838 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images podName:ce32fab5-629a-4cb5-b5a7-c91b113ae67d nodeName:}" failed. 
No retries permitted until 2026-01-21 00:08:25.928830168 +0000 UTC m=+138.168697824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images") pod "machine-config-operator-74547568cd-pf2qm" (UID: "ce32fab5-629a-4cb5-b5a7-c91b113ae67d") : failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428883 4873 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428907 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert podName:a87004d0-0618-4970-93d8-14326161f16a nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.92889921 +0000 UTC m=+138.168766856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert") pod "service-ca-operator-777779d784-p2cz2" (UID: "a87004d0-0618-4970-93d8-14326161f16a") : failed to sync secret cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428941 4873 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: E0121 00:08:25.428961 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume podName:20b87797-f91d-4f1a-b0d7-febdafa8e7ba nodeName:}" failed. No retries permitted until 2026-01-21 00:08:25.928955692 +0000 UTC m=+138.168823328 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume") pod "collect-profiles-29482560-xgsrm" (UID: "20b87797-f91d-4f1a-b0d7-febdafa8e7ba") : failed to sync configmap cache: timed out waiting for the condition Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.440765 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.460278 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.516406 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.517667 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.519647 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.540191 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.560619 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.581716 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.600971 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.624286 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.640290 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.660765 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.681467 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.701005 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.720711 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.740763 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 
00:08:25.761659 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.781282 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.800854 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.801047 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" event={"ID":"6d66f48c-9b12-4b28-aecf-455bc79f6ff0","Type":"ContainerStarted","Data":"491bad87a353c02aec7432b0d29b4696fb9377fafd55a8ae937297c18c2fa5e1"} Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.820601 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.846852 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.859934 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.880591 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.900220 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.941180 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.954964 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955014 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955126 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955207 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert\") pod 
\"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955232 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955253 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.955298 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.956748 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-images\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.957242 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.961009 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.961084 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-proxy-tls\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.961667 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9676863-bb22-4886-bed1-f22b6aa37f90-proxy-tls\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.962353 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632798b6-480c-42d9-a549-7b2e8a87b1e2-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:25 crc kubenswrapper[4873]: I0121 00:08:25.980502 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.000444 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.020540 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.040435 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.060462 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.098974 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml86k\" (UniqueName: \"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-kube-api-access-ml86k\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.117175 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt685\" (UniqueName: \"kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685\") pod \"route-controller-manager-6576b87f9c-nrngb\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.138848 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk4zm\" (UniqueName: \"kubernetes.io/projected/0a5e6054-8bdc-4431-914c-ba885604a20b-kube-api-access-qk4zm\") pod \"openshift-config-operator-7777fb866f-ch9hg\" (UID: \"0a5e6054-8bdc-4431-914c-ba885604a20b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.158261 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dn7t\" (UniqueName: \"kubernetes.io/projected/9250698c-3404-4a66-a9b6-286266f0e829-kube-api-access-2dn7t\") pod \"machine-api-operator-5694c8668f-7hk8w\" (UID: \"9250698c-3404-4a66-a9b6-286266f0e829\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.177698 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.178386 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxbr\" (UniqueName: \"kubernetes.io/projected/1d07d6a1-689c-4fb5-af20-c2ee5f54273f-kube-api-access-msxbr\") pod \"dns-operator-744455d44c-sqq6z\" (UID: \"1d07d6a1-689c-4fb5-af20-c2ee5f54273f\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.198783 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbjr\" (UniqueName: \"kubernetes.io/projected/2b3d8bc1-b5b3-4ab9-a585-4a690d57f630-kube-api-access-xbbjr\") pod \"authentication-operator-69f744f599-lh79l\" (UID: \"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.213829 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.219133 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hwq7\" (UniqueName: \"kubernetes.io/projected/110863cb-5af1-4e46-9e40-9831dfa25875-kube-api-access-6hwq7\") pod \"downloads-7954f5f757-9gjlf\" (UID: \"110863cb-5af1-4e46-9e40-9831dfa25875\") " pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.219269 4873 request.go:700] Waited for 1.900315322s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.248030 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lflq2\" (UniqueName: \"kubernetes.io/projected/5320d128-0e99-479c-bb4e-edf54938ea82-kube-api-access-lflq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-ls7nr\" (UID: \"5320d128-0e99-479c-bb4e-edf54938ea82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.271174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c999d\" (UniqueName: \"kubernetes.io/projected/4e56ba3a-006a-4ae3-ae87-5c9babf78867-kube-api-access-c999d\") pod \"console-f9d7485db-zffj8\" (UID: \"4e56ba3a-006a-4ae3-ae87-5c9babf78867\") " pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.280418 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgv4r\" (UniqueName: \"kubernetes.io/projected/4bdecaf8-534a-44fb-ad7a-224aacfc573b-kube-api-access-mgv4r\") pod \"console-operator-58897d9998-r2mwh\" (UID: \"4bdecaf8-534a-44fb-ad7a-224aacfc573b\") " pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.304863 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.306505 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jjz\" (UniqueName: \"kubernetes.io/projected/1bbf75ab-d6db-4c51-a850-bb383d55d2c3-kube-api-access-z9jjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xgml2\" (UID: \"1bbf75ab-d6db-4c51-a850-bb383d55d2c3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.319227 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6krt\" (UniqueName: \"kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt\") pod \"image-pruner-29482560-5mcvl\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.335936 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr684\" (UniqueName: \"kubernetes.io/projected/57f81905-c3d1-4eeb-83b7-4d25d73f77e5-kube-api-access-lr684\") pod \"machine-approver-56656f9798-cznck\" (UID: \"57f81905-c3d1-4eeb-83b7-4d25d73f77e5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.341015 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.347481 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.356528 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm658\" (UniqueName: \"kubernetes.io/projected/8afe39b0-54e9-4f29-98a3-52ec15ffcb6f-kube-api-access-sm658\") pod \"cluster-samples-operator-665b6dd947-m8l49\" (UID: \"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.361801 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.368969 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.374317 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4358e352-a966-4c5c-9f6e-fb7f11616026-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7kfzs\" (UID: \"4358e352-a966-4c5c-9f6e-fb7f11616026\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.379471 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.381944 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.394540 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.395186 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hk8w"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.401383 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.402465 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.420938 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.441602 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.443431 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lh79l"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.447794 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87004d0-0618-4970-93d8-14326161f16a-config\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.450996 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a87004d0-0618-4970-93d8-14326161f16a-serving-cert\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:26 crc kubenswrapper[4873]: W0121 00:08:26.452445 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9250698c_3404_4a66_a9b6_286266f0e829.slice/crio-42ef10b2b608a770181e24f415ece533c142638a23786fde2920b9649c5c2b06 WatchSource:0}: Error finding container 42ef10b2b608a770181e24f415ece533c142638a23786fde2920b9649c5c2b06: Status 404 returned error can't find the container with id 42ef10b2b608a770181e24f415ece533c142638a23786fde2920b9649c5c2b06 Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.462013 4873 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 00:08:26 crc kubenswrapper[4873]: W0121 00:08:26.472077 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3d8bc1_b5b3_4ab9_a585_4a690d57f630.slice/crio-3721e21bdb2739676a39471a609bb441b439aaf96d05a1b71f3ee504438ed03c WatchSource:0}: Error finding container 3721e21bdb2739676a39471a609bb441b439aaf96d05a1b71f3ee504438ed03c: Status 404 returned error can't find the container with id 3721e21bdb2739676a39471a609bb441b439aaf96d05a1b71f3ee504438ed03c Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.481047 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 
00:08:26.524879 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqsg\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-kube-api-access-cwqsg\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.541390 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.544195 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvzt\" (UniqueName: \"kubernetes.io/projected/d6947adf-bb59-4009-b65e-76b873d5fb18-kube-api-access-ftvzt\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6qmv\" (UID: \"d6947adf-bb59-4009-b65e-76b873d5fb18\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.569028 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.591119 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdd8df50-7130-49f2-b68f-6cb7c95a45de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-prftc\" (UID: \"fdd8df50-7130-49f2-b68f-6cb7c95a45de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.602227 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9gjlf"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.604150 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lpv7\" (UniqueName: \"kubernetes.io/projected/8837b547-757d-45f5-b85b-d800484f3d07-kube-api-access-7lpv7\") pod \"catalog-operator-68c6474976-6hltn\" (UID: \"8837b547-757d-45f5-b85b-d800484f3d07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:26 crc kubenswrapper[4873]: W0121 00:08:26.614787 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod110863cb_5af1_4e46_9e40_9831dfa25875.slice/crio-b803d500a23904419d7d84ec13622808bc2e5c89aaf7ac8540f8cb610aabff1d WatchSource:0}: Error finding container b803d500a23904419d7d84ec13622808bc2e5c89aaf7ac8540f8cb610aabff1d: Status 404 returned error can't find the container with id b803d500a23904419d7d84ec13622808bc2e5c89aaf7ac8540f8cb610aabff1d Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.615542 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.622299 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7hn\" (UniqueName: \"kubernetes.io/projected/632798b6-480c-42d9-a549-7b2e8a87b1e2-kube-api-access-hw7hn\") pod \"multus-admission-controller-857f4d67dd-mx25s\" (UID: \"632798b6-480c-42d9-a549-7b2e8a87b1e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.629613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thkc9\" (UniqueName: \"kubernetes.io/projected/62c1a236-f69b-401a-9340-c57b301f0657-kube-api-access-thkc9\") pod \"control-plane-machine-set-operator-78cbb6b69f-z8d7g\" (UID: \"62c1a236-f69b-401a-9340-c57b301f0657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.648152 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14771d68-d459-4b1c-88a4-4a83a3141db4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wqg68\" (UID: \"14771d68-d459-4b1c-88a4-4a83a3141db4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.652757 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.654497 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.680811 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsnq\" (UniqueName: \"kubernetes.io/projected/d659819a-df3b-470c-822e-45864643cffa-kube-api-access-hxsnq\") pod \"packageserver-d55dfcdfc-tvd8h\" (UID: \"d659819a-df3b-470c-822e-45864643cffa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.684336 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wwf\" (UniqueName: \"kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf\") pod \"controller-manager-879f6c89f-7tbgp\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.706342 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9qf\" (UniqueName: \"kubernetes.io/projected/a87004d0-0618-4970-93d8-14326161f16a-kube-api-access-sr9qf\") pod \"service-ca-operator-777779d784-p2cz2\" (UID: \"a87004d0-0618-4970-93d8-14326161f16a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.709123 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.716896 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxxt\" (UniqueName: \"kubernetes.io/projected/484fd929-7fa0-4ab0-95d0-fbbb122c255a-kube-api-access-srxxt\") pod \"etcd-operator-b45778765-jbpj7\" (UID: \"484fd929-7fa0-4ab0-95d0-fbbb122c255a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.719615 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.725911 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqq6z"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.730535 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-r2mwh"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.739149 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lglfc\" (UniqueName: \"kubernetes.io/projected/98561363-dd26-4b91-86a3-61f13886ce2c-kube-api-access-lglfc\") pod \"package-server-manager-789f6589d5-5jfsf\" (UID: \"98561363-dd26-4b91-86a3-61f13886ce2c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.764960 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.775110 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbl99\" (UniqueName: \"kubernetes.io/projected/aeb6b18b-cc94-4a8a-a970-e39447726765-kube-api-access-xbl99\") pod \"migrator-59844c95c7-r9ljc\" (UID: \"aeb6b18b-cc94-4a8a-a970-e39447726765\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.775091 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.782610 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.789064 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbkpj\" (UniqueName: \"kubernetes.io/projected/ad45b560-df33-48b7-ad41-f2b562b6b682-kube-api-access-wbkpj\") pod \"olm-operator-6b444d44fb-jv27r\" (UID: \"ad45b560-df33-48b7-ad41-f2b562b6b682\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.789349 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.811508 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.816766 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvb7h\" (UniqueName: \"kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h\") pod \"collect-profiles-29482560-xgsrm\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.816801 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" event={"ID":"efa5d9f4-85b1-4b35-9e6c-9c463462104f","Type":"ContainerStarted","Data":"1c540e7f3492c8441e50e02df15c5f808f5a2208a7e1705a0e7ca0130a2b861a"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.819655 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" event={"ID":"4bdecaf8-534a-44fb-ad7a-224aacfc573b","Type":"ContainerStarted","Data":"dd76d83396a6f1c58f15dec81be33e14a2f469e1fd6ac68633122fa551b1ba6f"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.824911 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlwk\" (UniqueName: \"kubernetes.io/projected/ce32fab5-629a-4cb5-b5a7-c91b113ae67d-kube-api-access-ntlwk\") pod \"machine-config-operator-74547568cd-pf2qm\" (UID: \"ce32fab5-629a-4cb5-b5a7-c91b113ae67d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.825228 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" event={"ID":"57f81905-c3d1-4eeb-83b7-4d25d73f77e5","Type":"ContainerStarted","Data":"242d7e6b6da2aaa6745831468dfb50ecce06ac1944ff77b96e88323f830bf091"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.825369 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.827670 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" event={"ID":"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630","Type":"ContainerStarted","Data":"fe2ae62616be6ef70ff5a8d3971eb14d6cc18d9625dac4eb6a50c2b16c07a23b"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.827706 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" event={"ID":"2b3d8bc1-b5b3-4ab9-a585-4a690d57f630","Type":"ContainerStarted","Data":"3721e21bdb2739676a39471a609bb441b439aaf96d05a1b71f3ee504438ed03c"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.834158 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" event={"ID":"1d07d6a1-689c-4fb5-af20-c2ee5f54273f","Type":"ContainerStarted","Data":"f5437c1323675fe0eb6fbe6951389dc13ce61cb647648b8c4c52b21dbefa0ea0"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.836624 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9gjlf" event={"ID":"110863cb-5af1-4e46-9e40-9831dfa25875","Type":"ContainerStarted","Data":"b803d500a23904419d7d84ec13622808bc2e5c89aaf7ac8540f8cb610aabff1d"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.840669 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" event={"ID":"9250698c-3404-4a66-a9b6-286266f0e829","Type":"ContainerStarted","Data":"5d48c41f5c0b622e6d7248b7308bc372399f3dad3f4cf5b72968a8543bf3eba7"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.840719 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" event={"ID":"9250698c-3404-4a66-a9b6-286266f0e829","Type":"ContainerStarted","Data":"42ef10b2b608a770181e24f415ece533c142638a23786fde2920b9649c5c2b06"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.841900 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqx4\" (UniqueName: \"kubernetes.io/projected/d9676863-bb22-4886-bed1-f22b6aa37f90-kube-api-access-ptqx4\") pod \"machine-config-controller-84d6567774-4hj68\" (UID: \"d9676863-bb22-4886-bed1-f22b6aa37f90\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.843834 4873 generic.go:334] "Generic (PLEG): container finished" podID="6d66f48c-9b12-4b28-aecf-455bc79f6ff0" containerID="3f185dc4db44aceaaae6908fdb296cb041f6e6516d4076c54d4931927ba035f5" exitCode=0 Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.843881 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" event={"ID":"6d66f48c-9b12-4b28-aecf-455bc79f6ff0","Type":"ContainerDied","Data":"3f185dc4db44aceaaae6908fdb296cb041f6e6516d4076c54d4931927ba035f5"} Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.859720 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.862904 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnst\" (UniqueName: \"kubernetes.io/projected/12926866-b4b6-4c68-b010-e30359824005-kube-api-access-9pnst\") pod \"router-default-5444994796-s5hrf\" (UID: \"12926866-b4b6-4c68-b010-e30359824005\") " pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.873523 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.884474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj\") pod \"oauth-openshift-558db77b4-7d257\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.897928 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.904190 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.911521 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.919272 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.933795 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.964774 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29482560-5mcvl"] Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977602 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-client\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977689 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977734 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-dir\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977761 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977782 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977803 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f224040b-f12f-43a9-a425-f971b7f1b028-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977829 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-policies\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977865 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977889 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qtj\" (UniqueName: \"kubernetes.io/projected/467315c7-9b10-4312-b8d1-25f6bf3c48c9-kube-api-access-s5qtj\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977909 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977936 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977959 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.977998 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-key\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978035 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:26 crc kubenswrapper[4873]: E0121 00:08:26.978071 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.478051816 +0000 UTC m=+139.717919462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978109 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-config\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978154 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978230 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hgz\" (UniqueName: \"kubernetes.io/projected/01bd6a42-c0f6-4193-b4d3-3366e082a486-kube-api-access-r2hgz\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978245 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978260 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v426\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978282 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978331 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-cabundle\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978351 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978373 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f224040b-f12f-43a9-a425-f971b7f1b028-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978406 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-serving-cert\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.978456 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcszd\" (UniqueName: \"kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.980185 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-encryption-config\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.980294 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f224040b-f12f-43a9-a425-f971b7f1b028-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:26 crc kubenswrapper[4873]: I0121 00:08:26.980453 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zffj8"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.043874 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.046656 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.053460 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.080807 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.081113 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-cabundle\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.081165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.081184 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f224040b-f12f-43a9-a425-f971b7f1b028-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.081833 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.58179517 +0000 UTC m=+139.821662976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.084474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-cabundle\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.084644 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glz8\" (UniqueName: \"kubernetes.io/projected/b60f95eb-22b2-47e1-8c88-71ff19f08465-kube-api-access-5glz8\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.084804 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.084887 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-serving-cert\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.084971 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-registration-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085118 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcszd\" (UniqueName: \"kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085371 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kd7\" (UniqueName: \"kubernetes.io/projected/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-kube-api-access-n9kd7\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085442 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/b58f6ba2-2455-47b0-82db-1b79fe74520e-cert\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085507 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-encryption-config\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085592 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-config-volume\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085662 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f224040b-f12f-43a9-a425-f971b7f1b028-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085687 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-mountpoint-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085752 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-client\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.085802 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.086535 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.088343 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-dir\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 
00:08:27.088478 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.088604 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvplj\" (UniqueName: \"kubernetes.io/projected/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-kube-api-access-cvplj\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.088691 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.088758 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f224040b-f12f-43a9-a425-f971b7f1b028-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.089100 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.589085456 +0000 UTC m=+139.828953102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.094491 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-dir\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.095010 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-policies\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.095095 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-metrics-tls\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.095866 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-audit-policies\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096008 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096193 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f224040b-f12f-43a9-a425-f971b7f1b028-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096278 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qtj\" (UniqueName: \"kubernetes.io/projected/467315c7-9b10-4312-b8d1-25f6bf3c48c9-kube-api-access-s5qtj\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096415 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096488 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpw7\" (UniqueName: \"kubernetes.io/projected/b58f6ba2-2455-47b0-82db-1b79fe74520e-kube-api-access-dxpw7\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096643 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096761 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.096860 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.097538 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bd6a42-c0f6-4193-b4d3-3366e082a486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.097952 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-key\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.100967 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.101008 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-csi-data-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.101885 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.102376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.102847 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-config\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.102893 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-socket-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.106946 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-config\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.107174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.107755 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-plugins-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.108298 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.108521 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.108669 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f224040b-f12f-43a9-a425-f971b7f1b028-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.109154 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-serving-cert\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.109568 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.110145 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2hgz\" (UniqueName: \"kubernetes.io/projected/01bd6a42-c0f6-4193-b4d3-3366e082a486-kube-api-access-r2hgz\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.110267 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.110291 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v426\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.110374 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-certs\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.110441 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.111300 4873 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.127769 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-etcd-client\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.131044 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.132020 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01bd6a42-c0f6-4193-b4d3-3366e082a486-encryption-config\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.132385 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.132597 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-node-bootstrap-token\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.134467 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.155140 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcszd\" (UniqueName: \"kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd\") pod \"marketplace-operator-79b997595-bhsdr\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.159115 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/467315c7-9b10-4312-b8d1-25f6bf3c48c9-signing-key\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.167874 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f224040b-f12f-43a9-a425-f971b7f1b028-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8kf2\" (UID: \"f224040b-f12f-43a9-a425-f971b7f1b028\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.186665 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.227089 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.234510 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235031 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glz8\" (UniqueName: \"kubernetes.io/projected/b60f95eb-22b2-47e1-8c88-71ff19f08465-kube-api-access-5glz8\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235094 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-registration-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235158 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kd7\" (UniqueName: \"kubernetes.io/projected/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-kube-api-access-n9kd7\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235187 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/b58f6ba2-2455-47b0-82db-1b79fe74520e-cert\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-config-volume\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-mountpoint-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235308 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvplj\" (UniqueName: \"kubernetes.io/projected/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-kube-api-access-cvplj\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235338 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-metrics-tls\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235387 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpw7\" (UniqueName: \"kubernetes.io/projected/b58f6ba2-2455-47b0-82db-1b79fe74520e-kube-api-access-dxpw7\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235451 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-csi-data-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235483 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-socket-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235514 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-plugins-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235608 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-certs\") pod 
\"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.235648 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-node-bootstrap-token\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.236536 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-config-volume\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.236587 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-registration-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.236727 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.736703071 +0000 UTC m=+139.976570717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.236773 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-mountpoint-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.236945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-socket-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.237058 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-csi-data-dir\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.237097 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b60f95eb-22b2-47e1-8c88-71ff19f08465-plugins-dir\") pod 
\"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.247298 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ee4dd9-2045-4c76-9ddd-8f0a81f148dd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6rk8p\" (UID: \"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.255872 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.256787 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qtj\" (UniqueName: \"kubernetes.io/projected/467315c7-9b10-4312-b8d1-25f6bf3c48c9-kube-api-access-s5qtj\") pod \"service-ca-9c57cc56f-4d6hg\" (UID: \"467315c7-9b10-4312-b8d1-25f6bf3c48c9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.263173 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2hgz\" (UniqueName: \"kubernetes.io/projected/01bd6a42-c0f6-4193-b4d3-3366e082a486-kube-api-access-r2hgz\") pod \"apiserver-7bbb656c7d-wf68l\" (UID: \"01bd6a42-c0f6-4193-b4d3-3366e082a486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.263219 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-certs\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.263515 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-metrics-tls\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.263523 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-node-bootstrap-token\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.263748 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b58f6ba2-2455-47b0-82db-1b79fe74520e-cert\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.268514 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v426\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.283295 
4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.307587 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxpw7\" (UniqueName: \"kubernetes.io/projected/b58f6ba2-2455-47b0-82db-1b79fe74520e-kube-api-access-dxpw7\") pod \"ingress-canary-9d5lf\" (UID: \"b58f6ba2-2455-47b0-82db-1b79fe74520e\") " pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.319233 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kd7\" (UniqueName: \"kubernetes.io/projected/9fcbbed6-6e52-4be7-92bc-9abc19663dc6-kube-api-access-n9kd7\") pod \"machine-config-server-qqg7r\" (UID: \"9fcbbed6-6e52-4be7-92bc-9abc19663dc6\") " pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.337383 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.338152 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.838123006 +0000 UTC m=+140.077990652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.339628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glz8\" (UniqueName: \"kubernetes.io/projected/b60f95eb-22b2-47e1-8c88-71ff19f08465-kube-api-access-5glz8\") pod \"csi-hostpathplugin-72s9c\" (UID: \"b60f95eb-22b2-47e1-8c88-71ff19f08465\") " pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.356050 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvplj\" (UniqueName: \"kubernetes.io/projected/e2ae5418-a2f2-41d8-b834-ac62b21dc1e5-kube-api-access-cvplj\") pod \"dns-default-q2stp\" (UID: \"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5\") " pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.370028 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.445236 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.445609 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:27.945588951 +0000 UTC m=+140.185456597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.453785 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.466688 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.489869 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.537818 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qqg7r" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.542587 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9d5lf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.548931 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.549300 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.049288264 +0000 UTC m=+140.289155910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.550746 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.555599 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-prftc"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.566573 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.567944 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.650133 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.650705 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.150687938 +0000 UTC m=+140.390555584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.753596 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.754177 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.254144314 +0000 UTC m=+140.494011950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.765277 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49"] Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.861772 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.862277 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.362239468 +0000 UTC m=+140.602107114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.862483 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.873335 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" event={"ID":"efa5d9f4-85b1-4b35-9e6c-9c463462104f","Type":"ContainerStarted","Data":"cac87290b7ca47aea50cdc746fc38a9ea690748574c758f606a6826170d4cbdd"} Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.874516 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.37448685 +0000 UTC m=+140.614354496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.875113 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.879384 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" event={"ID":"1bbf75ab-d6db-4c51-a850-bb383d55d2c3","Type":"ContainerStarted","Data":"17d1023b2d313eda613ac21c57ce10ef793a6d1b0339db1f7ecd8625ad6bf78f"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.881751 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" event={"ID":"4bdecaf8-534a-44fb-ad7a-224aacfc573b","Type":"ContainerStarted","Data":"0ba7c4fd7bfbb7ac72ab4473dc1881b6e8d43a108f6d97fe10db69c12c6ce95a"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.887394 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.931381 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" podStartSLOduration=119.931364706 podStartE2EDuration="1m59.931364706s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:27.927645315 +0000 UTC m=+140.167512961" watchObservedRunningTime="2026-01-21 00:08:27.931364706 +0000 UTC m=+140.171232352" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.931808 4873 patch_prober.go:28] interesting pod/console-operator-58897d9998-r2mwh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.931838 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" podUID="4bdecaf8-534a-44fb-ad7a-224aacfc573b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.964043 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:27 crc kubenswrapper[4873]: E0121 00:08:27.965873 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.465830708 +0000 UTC m=+140.705698354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.979461 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9gjlf" event={"ID":"110863cb-5af1-4e46-9e40-9831dfa25875","Type":"ContainerStarted","Data":"3d4d86686497e08fa25b0d1e053f63504abf0a3c40702feb70bd10c77c065d82"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.979987 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.981210 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" event={"ID":"fdd8df50-7130-49f2-b68f-6cb7c95a45de","Type":"ContainerStarted","Data":"99993ae3dce52b37763df6a072eea515d0ba8ccdf2e4adb523ed12984ee5ccb0"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.981942 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" event={"ID":"4358e352-a966-4c5c-9f6e-fb7f11616026","Type":"ContainerStarted","Data":"490fd0a7b0374d48d9bac2d0c936a1a3d955fc3650e1f6e7a90d47969e95fa4c"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.982922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" event={"ID":"0a5e6054-8bdc-4431-914c-ba885604a20b","Type":"ContainerStarted","Data":"cdfdd1321e01fe42e24e6c5dd092cf8675e7890c9dc1445939d573a1b7782ba3"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.983936 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5hrf" event={"ID":"12926866-b4b6-4c68-b010-e30359824005","Type":"ContainerStarted","Data":"2dc2eb8e5114f988f740b268d2ebcef9248c5df19f61970654030241d2592057"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.985329 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" event={"ID":"57f81905-c3d1-4eeb-83b7-4d25d73f77e5","Type":"ContainerStarted","Data":"091090aeccc02f3ad6696ed10534d5c6281f02bcc95eb90e05d7b1380aabd955"} Jan 21 00:08:27 crc kubenswrapper[4873]: I0121 00:08:27.989119 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-5mcvl" event={"ID":"8e08f95c-603b-4967-a545-b4cc31eeca6d","Type":"ContainerStarted","Data":"90e1a61c3bd9426265ff7c2d7ebf3e2adf4d4fcef86259ea629f913267e7eee4"} Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.000316 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gjlf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.000367 4873 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gjlf" podUID="110863cb-5af1-4e46-9e40-9831dfa25875" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.006466 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hk8w" event={"ID":"9250698c-3404-4a66-a9b6-286266f0e829","Type":"ContainerStarted","Data":"4879bd3f3ceb63ab3f8a77310a50eff5c965bf0397213e3e35c95a76f4e3eed6"} Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.013538 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" event={"ID":"5320d128-0e99-479c-bb4e-edf54938ea82","Type":"ContainerStarted","Data":"c5a3a4a488bdaca19ef99ba616f0fb8b71297703a9aed44b5e5500463f0259d4"} Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.016182 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" event={"ID":"98561363-dd26-4b91-86a3-61f13886ce2c","Type":"ContainerStarted","Data":"15516971e004458465f147436457c2b7adf929ebb4f69b231ebd0b01c5e6c64f"} Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.017957 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zffj8" event={"ID":"4e56ba3a-006a-4ae3-ae87-5c9babf78867","Type":"ContainerStarted","Data":"d6918a43aa30d053cc0b7f36393de9ea0f8215d0190b2625919dc88fe2cf3e8a"} Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.069048 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.070828 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.570808317 +0000 UTC m=+140.810675963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.092912 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.092949 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.114123 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.114214 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.121052 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jbpj7"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.138805 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.138847 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.166647 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.170094 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.171009 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.670984507 +0000 UTC m=+140.910852153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: W0121 00:08:28.175410 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8837b547_757d_45f5_b85b_d800484f3d07.slice/crio-3eb726995a7a898b49cffcb59720623279f7fbaacac1e713181909b32bd1c890 WatchSource:0}: Error finding container 3eb726995a7a898b49cffcb59720623279f7fbaacac1e713181909b32bd1c890: Status 404 returned error can't find the container with id 3eb726995a7a898b49cffcb59720623279f7fbaacac1e713181909b32bd1c890 Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.273399 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.273920 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.773902547 +0000 UTC m=+141.013770193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.376897 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.377333 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.87729807 +0000 UTC m=+141.117165716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.377516 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.379630 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.879613849 +0000 UTC m=+141.119481505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.482067 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.487757 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.98763994 +0000 UTC m=+141.227507596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.495399 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.495833 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:28.995813472 +0000 UTC m=+141.235681118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.516212 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.551505 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.553137 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.555779 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.562878 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc"] Jan 21 00:08:28 crc kubenswrapper[4873]: W0121 00:08:28.565003 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14771d68_d459_4b1c_88a4_4a83a3141db4.slice/crio-2b7e2a02b1207049c958b300db93f536576681b5eb9fb730277439de33318b2c WatchSource:0}: Error finding container 2b7e2a02b1207049c958b300db93f536576681b5eb9fb730277439de33318b2c: Status 404 returned error can't find the container with id 2b7e2a02b1207049c958b300db93f536576681b5eb9fb730277439de33318b2c Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.566043 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lh79l" podStartSLOduration=121.566026853 podStartE2EDuration="2m1.566026853s" 
podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:28.520162593 +0000 UTC m=+140.760030239" watchObservedRunningTime="2026-01-21 00:08:28.566026853 +0000 UTC m=+140.805894499" Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.573206 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.579262 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.583175 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mx25s"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.597147 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.601686 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.101611628 +0000 UTC m=+141.341479274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.601807 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.602333 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.102304138 +0000 UTC m=+141.342171784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.703566 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.704633 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.204605649 +0000 UTC m=+141.444473295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.805567 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.806182 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.306154339 +0000 UTC m=+141.546021985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.817897 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.864126 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.896541 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9d5lf"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.906209 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:28 crc kubenswrapper[4873]: E0121 00:08:28.906805 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.406789011 +0000 UTC m=+141.646656657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.930718 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.938048 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" podStartSLOduration=120.938030307 podStartE2EDuration="2m0.938030307s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:28.887397736 +0000 UTC m=+141.127265382" watchObservedRunningTime="2026-01-21 00:08:28.938030307 +0000 UTC m=+141.177897953" Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.948147 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q2stp"] Jan 21 00:08:28 crc kubenswrapper[4873]: I0121 00:08:28.980040 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l"] Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.012602 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.013125 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.513100731 +0000 UTC m=+141.752968377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.042615 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" podStartSLOduration=122.042595776 podStartE2EDuration="2m2.042595776s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.016675057 +0000 UTC m=+141.256542703" watchObservedRunningTime="2026-01-21 00:08:29.042595776 +0000 UTC m=+141.282463422" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.042808 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bhsdr"] Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.065655 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5hrf" event={"ID":"12926866-b4b6-4c68-b010-e30359824005","Type":"ContainerStarted","Data":"64acf3f7b0abe7b92ef792c4a2e60a9ae4fee3ea28425ef63e889cd0661defcc"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.068504 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9gjlf" podStartSLOduration=122.068477402 podStartE2EDuration="2m2.068477402s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.035242817 +0000 UTC m=+141.275110463" watchObservedRunningTime="2026-01-21 00:08:29.068477402 +0000 UTC m=+141.308345048" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.079079 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-72s9c"] Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.088631 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4d6hg"] Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.089367 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" event={"ID":"ad45b560-df33-48b7-ad41-f2b562b6b682","Type":"ContainerStarted","Data":"211c747becb34152edc2c57a9f3bf917e8c53a7ece7c919d38d00c97c0a641bf"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.096043 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" event={"ID":"f224040b-f12f-43a9-a425-f971b7f1b028","Type":"ContainerStarted","Data":"acbcf6a1c5ba3466a3124847241ae1a707b6d95252781155c625ced3fd71af77"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.104462 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" event={"ID":"1d07d6a1-689c-4fb5-af20-c2ee5f54273f","Type":"ContainerStarted","Data":"81c61d94e360e3f6f0e424f0ab858484709d7c75d778cdbfc3f4dc180b6ac739"} Jan 21 00:08:29 crc 
kubenswrapper[4873]: I0121 00:08:29.111731 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" event={"ID":"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd","Type":"ContainerStarted","Data":"e24b9de3ddc88195f78114c3f36a7c77fb0d5d40f879419626283d5cab89ee1f"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.113595 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.114149 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.614128255 +0000 UTC m=+141.853995911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.114304 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" event={"ID":"d659819a-df3b-470c-822e-45864643cffa","Type":"ContainerStarted","Data":"005d7e578884586f196d403e43d181085d2a8706965985e9e9ef36b0e81b053d"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.114357 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" event={"ID":"d659819a-df3b-470c-822e-45864643cffa","Type":"ContainerStarted","Data":"adf52b297a44d7254773cefeffa1022f6b0f5272bbd9a2b13973904af528d721"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.114957 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.116005 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" event={"ID":"632798b6-480c-42d9-a549-7b2e8a87b1e2","Type":"ContainerStarted","Data":"13152db6863c47a6ce1e0db440768f94c8fc521d38348a8ad3f1b3c0e1d1dd8c"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.116757 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" event={"ID":"ce32fab5-629a-4cb5-b5a7-c91b113ae67d","Type":"ContainerStarted","Data":"8a023fce6f0b36b4337c37a1a6765b3d8a7fa7fe9d44cade797be20d27fd2d79"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.117414 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" event={"ID":"14771d68-d459-4b1c-88a4-4a83a3141db4","Type":"ContainerStarted","Data":"2b7e2a02b1207049c958b300db93f536576681b5eb9fb730277439de33318b2c"} Jan 21 00:08:29 crc 
kubenswrapper[4873]: I0121 00:08:29.124852 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" event={"ID":"fa9a91c2-efee-4e59-acaa-c5f236e0f857","Type":"ContainerStarted","Data":"38c56e5f14c2215b5c39381a156b80c35cf6b4c6dcc1b342acef9240d586ae04"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.125166 4873 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tvd8h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.125504 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" podUID="d659819a-df3b-470c-822e-45864643cffa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.135646 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.147035 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:29 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:29 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:29 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.147137 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.180884 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qqg7r" event={"ID":"9fcbbed6-6e52-4be7-92bc-9abc19663dc6","Type":"ContainerStarted","Data":"edb1f647d86528962e0ee638f9822da016c1948a0639540255ce1eee1c984294"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.180939 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qqg7r" event={"ID":"9fcbbed6-6e52-4be7-92bc-9abc19663dc6","Type":"ContainerStarted","Data":"66ee904b82c304fc61d73c056adba77bd6a35afb005899b688f18ca088b2af86"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.194022 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" event={"ID":"8837b547-757d-45f5-b85b-d800484f3d07","Type":"ContainerStarted","Data":"3c55dee8f1c62c72edce580945b535d1b6e7bd54ac82ce28b98ddffe965e3f84"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.194072 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" event={"ID":"8837b547-757d-45f5-b85b-d800484f3d07","Type":"ContainerStarted","Data":"3eb726995a7a898b49cffcb59720623279f7fbaacac1e713181909b32bd1c890"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.195246 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.197481 4873 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6hltn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.197588 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" podUID="8837b547-757d-45f5-b85b-d800484f3d07" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.213441 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" event={"ID":"57f81905-c3d1-4eeb-83b7-4d25d73f77e5","Type":"ContainerStarted","Data":"52ee798e5e9c453db9e59050fec0c77bee9387af77f33cbc5f075b808a86a8c4"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.217373 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.218515 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.718503318 +0000 UTC m=+141.958370964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.233405 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-5mcvl" event={"ID":"8e08f95c-603b-4967-a545-b4cc31eeca6d","Type":"ContainerStarted","Data":"903774dedd0e6d5339b54a70ee44ffa1b6aa2192c3974d07a6c32ca5364fa17f"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.271306 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" event={"ID":"5320d128-0e99-479c-bb4e-edf54938ea82","Type":"ContainerStarted","Data":"c8ddf8be8aa0e58966c1072ff40a1dda68679fdf50cec09bbbecb91752c6c21c"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.325482 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.326566 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.826529079 +0000 UTC m=+142.066396725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.345168 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" event={"ID":"484fd929-7fa0-4ab0-95d0-fbbb122c255a","Type":"ContainerStarted","Data":"72ce2c020623a1ae0055aa5255269c6661374626b9b9c225f46da00bc4a9511f"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.376782 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" event={"ID":"1bbf75ab-d6db-4c51-a850-bb383d55d2c3","Type":"ContainerStarted","Data":"ef8f47bf3933795676bfe5861177a6db4e70902318340c7d73eb18d032fd77fa"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.400614 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zffj8" event={"ID":"4e56ba3a-006a-4ae3-ae87-5c9babf78867","Type":"ContainerStarted","Data":"b5beb9493a0217c51034d628657c6110a3733833cf2da4b62f61a1a3d3055890"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.427118 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.427929 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:29.927908873 +0000 UTC m=+142.167776519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.445417 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" event={"ID":"62c1a236-f69b-401a-9340-c57b301f0657","Type":"ContainerStarted","Data":"2da809caf2980e07760876f275f69f525bfc28cde64af8233fb0a097a2d2425c"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.445471 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" event={"ID":"62c1a236-f69b-401a-9340-c57b301f0657","Type":"ContainerStarted","Data":"311e6cd853d056094323350e5968ff5aa48b5f2c19eb7e872d1614157aaad18e"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.467922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" event={"ID":"d9676863-bb22-4886-bed1-f22b6aa37f90","Type":"ContainerStarted","Data":"a76edd37f6baaa1ad332e110e197f3372b081fcfaff72f9c91891091992f65e2"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.502404 4873 generic.go:334] "Generic (PLEG): container finished" podID="0a5e6054-8bdc-4431-914c-ba885604a20b" containerID="5b1b4ae207597ca450dc2c50f780434d8e76e2e7f10951c1ba637e9533a0dd24" exitCode=0 Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.502501 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" event={"ID":"0a5e6054-8bdc-4431-914c-ba885604a20b","Type":"ContainerDied","Data":"5b1b4ae207597ca450dc2c50f780434d8e76e2e7f10951c1ba637e9533a0dd24"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.524480 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" event={"ID":"20b87797-f91d-4f1a-b0d7-febdafa8e7ba","Type":"ContainerStarted","Data":"3c05f627b89df706c21431a67d1bde0de9a5b9eaa8faa61fb0bc47c183731089"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.528370 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.529432 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.029415511 +0000 UTC m=+142.269283157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.554949 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" event={"ID":"aeb6b18b-cc94-4a8a-a970-e39447726765","Type":"ContainerStarted","Data":"327235c20005f4cc506fb04ad6c753f5f93e0dc8ee26f633f08144bffe6e4694"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.561304 4873 csr.go:261] certificate signing request csr-mnwt5 is approved, waiting to be issued Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.572814 4873 csr.go:257] certificate signing request csr-mnwt5 is issued Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.584351 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" event={"ID":"a87004d0-0618-4970-93d8-14326161f16a","Type":"ContainerStarted","Data":"445d2ae86bf674733c8c2b18d27078b0fe271b3aa15da7e32454c36d99331722"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.609857 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" event={"ID":"4358e352-a966-4c5c-9f6e-fb7f11616026","Type":"ContainerStarted","Data":"44065aa3569b3aaf6879eb1f234ec4ec6f03e604d7f6a30a51f9c65e9a67ccac"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.633360 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.633680 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.133668201 +0000 UTC m=+142.373535847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: W0121 00:08:29.683735 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60f95eb_22b2_47e1_8c88_71ff19f08465.slice/crio-50531179dc5371a201a85ee632be3130d3fdf38c997cb525e89d90d61abb2258 WatchSource:0}: Error finding container 50531179dc5371a201a85ee632be3130d3fdf38c997cb525e89d90d61abb2258: Status 404 returned error can't find the container with id 50531179dc5371a201a85ee632be3130d3fdf38c997cb525e89d90d61abb2258 Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.689039 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" event={"ID":"6d66f48c-9b12-4b28-aecf-455bc79f6ff0","Type":"ContainerStarted","Data":"b8bbc78c290098ba15839d64f286ab1e471e860f20444f540a72aa83a22fb28a"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.689149 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" event={"ID":"6d66f48c-9b12-4b28-aecf-455bc79f6ff0","Type":"ContainerStarted","Data":"62e38389819b85e9b02d65cf96ef7898fd26ce7acb3e37823d8dc50b36f4a888"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.709780 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" event={"ID":"fdd8df50-7130-49f2-b68f-6cb7c95a45de","Type":"ContainerStarted","Data":"360c0791ba7f41c5b5573eb18b702c7e65b4518853ff062586c4402ebd95be98"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.735205 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.736795 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.236767175 +0000 UTC m=+142.476634831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.769482 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" event={"ID":"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f","Type":"ContainerStarted","Data":"da652933e169193e6ea123c9a8684b05f1c651faebc25b5d7f3c76643166356c"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.799489 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zffj8" podStartSLOduration=122.799469944 podStartE2EDuration="2m2.799469944s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.79833316 +0000 UTC m=+142.038200806" watchObservedRunningTime="2026-01-21 00:08:29.799469944 +0000 UTC m=+142.039337590" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.818185 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" event={"ID":"d8c5454c-78f0-4b3e-a5d8-50319b08070d","Type":"ContainerStarted","Data":"a9c32412789155d4d3bacab94807fce448cc1d402b9db0a4ce68830ef5b82eff"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.818232 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" event={"ID":"d8c5454c-78f0-4b3e-a5d8-50319b08070d","Type":"ContainerStarted","Data":"da31a528c4c93902f9430077dac6920b79a49061fcb31733d7e3c89bd61c806c"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.819157 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.834486 4873 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7tbgp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.834537 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.837377 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.838654 4873 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.338642934 +0000 UTC m=+142.578510580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.838977 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" event={"ID":"98561363-dd26-4b91-86a3-61f13886ce2c","Type":"ContainerStarted","Data":"5f1521474f7fb7e23e1a0c911cb73d7aa40c8dd79473867fc7a3b4d968ba2ac6"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.846007 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" podStartSLOduration=121.845984462 podStartE2EDuration="2m1.845984462s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.834964385 +0000 UTC m=+142.074832031" watchObservedRunningTime="2026-01-21 00:08:29.845984462 +0000 UTC m=+142.085852108" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.865224 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z8d7g" podStartSLOduration=121.865175721 podStartE2EDuration="2m1.865175721s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.863501741 +0000 UTC m=+142.103369387" watchObservedRunningTime="2026-01-21 00:08:29.865175721 +0000 UTC m=+142.105043367" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.904201 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" event={"ID":"d6947adf-bb59-4009-b65e-76b873d5fb18","Type":"ContainerStarted","Data":"21d52b8965ee4f78c5a4e06c96169c12ab952bd5305624f515ae6788392ce0e1"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.904779 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" event={"ID":"d6947adf-bb59-4009-b65e-76b873d5fb18","Type":"ContainerStarted","Data":"33f71652c4f3eb2d03f87bfd4d66c7b11e0b7c562c6b4b4dfa2b2f0b6f238195"} Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.905334 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gjlf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.905382 4873 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-9gjlf" podUID="110863cb-5af1-4e46-9e40-9831dfa25875" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.939249 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.943673 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-r2mwh" Jan 21 00:08:29 crc kubenswrapper[4873]: E0121 00:08:29.945208 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.445192502 +0000 UTC m=+142.685060138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.949061 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" podStartSLOduration=121.949049146 podStartE2EDuration="2m1.949049146s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.890075869 +0000 UTC m=+142.129943515" watchObservedRunningTime="2026-01-21 00:08:29.949049146 +0000 UTC m=+142.188916792" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.949312 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" podStartSLOduration=122.949305694 podStartE2EDuration="2m2.949305694s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.944038417 +0000 UTC m=+142.183906063" watchObservedRunningTime="2026-01-21 00:08:29.949305694 +0000 UTC m=+142.189173340" Jan 21 00:08:29 crc kubenswrapper[4873]: I0121 00:08:29.975816 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cznck" podStartSLOduration=122.975796229 podStartE2EDuration="2m2.975796229s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:29.974094878 +0000 UTC m=+142.213962535" watchObservedRunningTime="2026-01-21 00:08:29.975796229 +0000 UTC m=+142.215663875" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.004834 4873 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ls7nr" podStartSLOduration=123.004821329 podStartE2EDuration="2m3.004821329s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.003952513 +0000 UTC m=+142.243820159" watchObservedRunningTime="2026-01-21 00:08:30.004821329 +0000 UTC m=+142.244688965" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.030174 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xgml2" podStartSLOduration=123.03015923 podStartE2EDuration="2m3.03015923s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.027996235 +0000 UTC m=+142.267863881" watchObservedRunningTime="2026-01-21 00:08:30.03015923 +0000 UTC m=+142.270026866" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.042575 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.042890 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.542877927 +0000 UTC m=+142.782745573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.070280 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-s5hrf" podStartSLOduration=122.070267799 podStartE2EDuration="2m2.070267799s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.069939698 +0000 UTC m=+142.309807344" watchObservedRunningTime="2026-01-21 00:08:30.070267799 +0000 UTC m=+142.310135445" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.106104 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29482560-5mcvl" podStartSLOduration=123.106081289 podStartE2EDuration="2m3.106081289s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.100880305 +0000 UTC m=+142.340747951" watchObservedRunningTime="2026-01-21 00:08:30.106081289 +0000 UTC m=+142.345948935" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.140230 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:30 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:30 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:30 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.140277 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.141469 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" podStartSLOduration=122.141453997 podStartE2EDuration="2m2.141453997s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.140350165 +0000 UTC m=+142.380217811" watchObservedRunningTime="2026-01-21 00:08:30.141453997 +0000 UTC m=+142.381321643" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.146603 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.146881 4873 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.646867208 +0000 UTC m=+142.886734854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.190601 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qqg7r" podStartSLOduration=6.190583753 podStartE2EDuration="6.190583753s" podCreationTimestamp="2026-01-21 00:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.186160142 +0000 UTC m=+142.426027798" watchObservedRunningTime="2026-01-21 00:08:30.190583753 +0000 UTC m=+142.430451399" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.237954 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.238133 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.244766 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" podStartSLOduration=123.244752449 podStartE2EDuration="2m3.244752449s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.242302656 +0000 UTC m=+142.482170302" watchObservedRunningTime="2026-01-21 00:08:30.244752449 +0000 UTC m=+142.484620095" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.255276 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.255618 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.755606481 +0000 UTC m=+142.995474127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.287743 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7kfzs" podStartSLOduration=123.287722342 podStartE2EDuration="2m3.287722342s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.286040033 +0000 UTC m=+142.525907679" watchObservedRunningTime="2026-01-21 00:08:30.287722342 +0000 UTC m=+142.527589988" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.327313 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6qmv" podStartSLOduration=122.327299185 podStartE2EDuration="2m2.327299185s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.325308836 +0000 UTC m=+142.565176482" watchObservedRunningTime="2026-01-21 00:08:30.327299185 +0000 UTC m=+142.567166831" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.356985 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.372774 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.872725342 +0000 UTC m=+143.112592988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.407408 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podStartSLOduration=123.407388738 podStartE2EDuration="2m3.407388738s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:30.403007188 +0000 UTC m=+142.642874834" watchObservedRunningTime="2026-01-21 00:08:30.407388738 +0000 UTC m=+142.647256384" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.473540 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.473910 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:30.973899269 +0000 UTC m=+143.213766905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.574071 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 00:03:29 +0000 UTC, rotation deadline is 2026-11-16 19:07:07.298630434 +0000 UTC Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.574124 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7194h58m36.724508155s for next certificate rotation Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.574466 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.574893 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.074874811 +0000 UTC m=+143.314742457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.676069 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.676432 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.1764152 +0000 UTC m=+143.416282846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.777035 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.777167 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.277148265 +0000 UTC m=+143.517015911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.777534 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.777798 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.277787614 +0000 UTC m=+143.517655260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.878314 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.878518 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.378497129 +0000 UTC m=+143.618364775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.878762 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.879269 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.379262521 +0000 UTC m=+143.619130167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.907889 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6q5pv"] Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.908944 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.913725 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.943416 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q5pv"] Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.960529 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" event={"ID":"7064ffc6-970d-4592-a979-ed7fd110cbc8","Type":"ContainerStarted","Data":"34c426d007419ccd6b905584dd3f993f35029f0b6e0b6936b7f5b528c0c83468"} Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.979643 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.980006 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.980082 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9hj\" (UniqueName: \"kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.980111 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:30 crc kubenswrapper[4873]: E0121 00:08:30.980215 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.480200662 +0000 UTC m=+143.720068308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.987400 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" event={"ID":"01bd6a42-c0f6-4193-b4d3-3366e082a486","Type":"ContainerStarted","Data":"14744317207a9a1cc4002768462aecd4a557cdfc7e41663ff853318f6ba6e799"} Jan 21 00:08:30 crc kubenswrapper[4873]: I0121 00:08:30.987440 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" event={"ID":"01bd6a42-c0f6-4193-b4d3-3366e082a486","Type":"ContainerStarted","Data":"48bc003b77da11521194b91bdb1ae7fc45ae371418e657aa258a6e07ec84ea2b"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.028896 4873 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n7fpv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]log ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]etcd ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/max-in-flight-filter ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 00:08:31 crc kubenswrapper[4873]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 00:08:31 crc kubenswrapper[4873]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-startinformers ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 00:08:31 crc kubenswrapper[4873]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 00:08:31 crc kubenswrapper[4873]: livez check failed Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.028944 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" podUID="6d66f48c-9b12-4b28-aecf-455bc79f6ff0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.064117 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.080883 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.081150 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.581138153 +0000 UTC m=+143.821005799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.081686 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx9hj\" (UniqueName: \"kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.081761 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.081889 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.083915 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.084295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.110286 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" event={"ID":"fa9a91c2-efee-4e59-acaa-c5f236e0f857","Type":"ContainerStarted","Data":"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.112609 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" 
Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.114522 4873 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7d257 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.114579 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.135119 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xv5c9"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.136082 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.144411 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:31 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:31 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:31 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.149176 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" event={"ID":"aeb6b18b-cc94-4a8a-a970-e39447726765","Type":"ContainerStarted","Data":"cf009eb425260880b94c928679294f0b45997d6bc38662c41aa96c362f60a69f"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.155676 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.158158 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" event={"ID":"b60f95eb-22b2-47e1-8c88-71ff19f08465","Type":"ContainerStarted","Data":"50531179dc5371a201a85ee632be3130d3fdf38c997cb525e89d90d61abb2258"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.182797 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.186492 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.186734 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " 
pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.186975 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.187143 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmt2\" (UniqueName: \"kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.188111 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.688094783 +0000 UTC m=+143.927962419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.193394 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" event={"ID":"ad45b560-df33-48b7-ad41-f2b562b6b682","Type":"ContainerStarted","Data":"58516f57b7273bf40b752b2d245487786c3e47fb909d4e00be05cc7fb8ca40cb"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.194465 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.198694 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xv5c9"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.216329 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx9hj\" (UniqueName: \"kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj\") pod \"certified-operators-6q5pv\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.221866 4873 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jv27r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.221928 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" podUID="ad45b560-df33-48b7-ad41-f2b562b6b682" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": 
dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.244432 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" podStartSLOduration=124.244406072 podStartE2EDuration="2m4.244406072s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.22208903 +0000 UTC m=+143.461956676" watchObservedRunningTime="2026-01-21 00:08:31.244406072 +0000 UTC m=+143.484273728" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.274809 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" podStartSLOduration=124.274792602 podStartE2EDuration="2m4.274792602s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.271297898 +0000 UTC m=+143.511165544" watchObservedRunningTime="2026-01-21 00:08:31.274792602 +0000 UTC m=+143.514660248" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.275120 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.277627 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.277805 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.302928 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.302982 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.303106 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.303167 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmt2\" (UniqueName: \"kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.315118 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" event={"ID":"d9676863-bb22-4886-bed1-f22b6aa37f90","Type":"ContainerStarted","Data":"5a72702c741cb400557e76768e991b54d9e12a34afa94ddf1fe8e5b6aae94425"} Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.321501 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.821485826 +0000 UTC m=+144.061353472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.321972 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.326265 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.333614 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.343915 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" podStartSLOduration=124.34389429 podStartE2EDuration="2m4.34389429s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.305828192 +0000 UTC m=+143.545695868" watchObservedRunningTime="2026-01-21 00:08:31.34389429 +0000 UTC m=+143.583761936" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.391200 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmt2\" (UniqueName: \"kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2\") pod \"community-operators-xv5c9\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.406182 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.406370 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nx59p\" (UniqueName: \"kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.406391 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.406507 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:31.906477684 +0000 UTC m=+144.146345330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.406631 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.416307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" event={"ID":"467315c7-9b10-4312-b8d1-25f6bf3c48c9","Type":"ContainerStarted","Data":"16b50a2f7c92467ee317de1517ce1733a51a4c5df19b88f4569d1e3855ee2a67"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.417511 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" event={"ID":"467315c7-9b10-4312-b8d1-25f6bf3c48c9","Type":"ContainerStarted","Data":"9822438ac76ab94ec0856e1f0ff914592da00e33352f3d48050bde81628678e9"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.455315 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" podStartSLOduration=123.455299861 podStartE2EDuration="2m3.455299861s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.454805536 +0000 UTC m=+143.694673182" watchObservedRunningTime="2026-01-21 00:08:31.455299861 +0000 UTC m=+143.695167507" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.471015 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" event={"ID":"1d07d6a1-689c-4fb5-af20-c2ee5f54273f","Type":"ContainerStarted","Data":"6e963590edccf868e8c251631d45b42b9cb9c0869169d4c0e6ded1c7e9c9236c"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 
00:08:31.474773 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.484475 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.486962 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4d6hg" podStartSLOduration=123.486949698 podStartE2EDuration="2m3.486949698s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.486343011 +0000 UTC m=+143.726210667" watchObservedRunningTime="2026-01-21 00:08:31.486949698 +0000 UTC m=+143.726817344" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.503421 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.505445 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jbpj7" event={"ID":"484fd929-7fa0-4ab0-95d0-fbbb122c255a","Type":"ContainerStarted","Data":"fd4f6df23b36594c801f07f5b6413c3d31b98ccce4dc8cc41d6431dbbaa88f7a"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.508190 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx59p\" (UniqueName: \"kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.508250 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.508283 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.508328 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.509045 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.509387 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.509737 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.009721234 +0000 UTC m=+144.249588960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.515357 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.524752 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sqq6z" podStartSLOduration=124.524736398 podStartE2EDuration="2m4.524736398s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.524229323 +0000 UTC m=+143.764096969" watchObservedRunningTime="2026-01-21 00:08:31.524736398 +0000 UTC m=+143.764604044" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.554251 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9d5lf" event={"ID":"b58f6ba2-2455-47b0-82db-1b79fe74520e","Type":"ContainerStarted","Data":"1ce593af9eb46c6795cf863d4f5aee5c364ca1c4db55f9826c0db49592490689"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.562619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx59p\" (UniqueName: \"kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p\") pod \"certified-operators-qjfd5\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.602446 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" event={"ID":"98561363-dd26-4b91-86a3-61f13886ce2c","Type":"ContainerStarted","Data":"7a40d89534a5bf864e36e5f64128e31585b3c4f04af1c95b593fe5db09055645"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.603167 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.610290 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 
00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.610503 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6cw\" (UniqueName: \"kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.610594 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.610636 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.610751 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.110735707 +0000 UTC m=+144.350603353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.614814 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.631471 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.631519 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.638105 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" podStartSLOduration=123.638088477 podStartE2EDuration="2m3.638088477s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.636648524 +0000 UTC m=+143.876516170" watchObservedRunningTime="2026-01-21 00:08:31.638088477 +0000 UTC m=+143.877956123" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.639261 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9d5lf" podStartSLOduration=7.639256782 podStartE2EDuration="7.639256782s" podCreationTimestamp="2026-01-21 00:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.602726219 +0000 UTC m=+143.842593865" watchObservedRunningTime="2026-01-21 00:08:31.639256782 +0000 UTC m=+143.879124428" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.669513 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" event={"ID":"20b87797-f91d-4f1a-b0d7-febdafa8e7ba","Type":"ContainerStarted","Data":"e280aa140ecccf0a2c84d4e15273f53e55ef395fc3b3c03498cb3dfc9d9bc3e8"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.712218 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6cw\" (UniqueName: \"kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.712468 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.712492 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content\") pod \"community-operators-vbxjc\" (UID: 
\"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.712526 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.714014 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.214003797 +0000 UTC m=+144.453871443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.714349 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.714691 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.717405 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" podStartSLOduration=124.717379547 podStartE2EDuration="2m4.717379547s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.711051629 +0000 UTC m=+143.950919275" watchObservedRunningTime="2026-01-21 00:08:31.717379547 +0000 UTC m=+143.957247203" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.746171 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6cw\" (UniqueName: \"kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw\") pod \"community-operators-vbxjc\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.773641 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" podStartSLOduration=123.773628574 podStartE2EDuration="2m3.773628574s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
00:08:31.773290144 +0000 UTC m=+144.013157790" watchObservedRunningTime="2026-01-21 00:08:31.773628574 +0000 UTC m=+144.013496220" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.773760 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p2cz2" event={"ID":"a87004d0-0618-4970-93d8-14326161f16a","Type":"ContainerStarted","Data":"642a1f96b3416a7b7f21f6f545407f398d2e13d3ac4ba2c05858a6fe86774a09"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.812863 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.813955 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" event={"ID":"14771d68-d459-4b1c-88a4-4a83a3141db4","Type":"ContainerStarted","Data":"7f9115c03bf31c43b9371ed86e0009b5eedb5ff6aa8fac3172e02066e2d7c582"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.814717 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.815678 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.315664499 +0000 UTC m=+144.555532135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.832393 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" podStartSLOduration=123.832378155 podStartE2EDuration="2m3.832378155s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.831966363 +0000 UTC m=+144.071834009" watchObservedRunningTime="2026-01-21 00:08:31.832378155 +0000 UTC m=+144.072245801" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.870719 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q2stp" event={"ID":"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5","Type":"ContainerStarted","Data":"894f778a3c65e1bc86ab6ed700c37b4de3c2adb20b28ccf382ff6636d95528b7"} Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.881765 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wqg68" podStartSLOduration=123.881750228 podStartE2EDuration="2m3.881750228s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.879949404 +0000 UTC m=+144.119817050" watchObservedRunningTime="2026-01-21 00:08:31.881750228 +0000 UTC m=+144.121617874" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.890705 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.900715 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tvd8h" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.912891 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" podStartSLOduration=123.912874811 podStartE2EDuration="2m3.912874811s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:31.909968924 +0000 UTC m=+144.149836570" watchObservedRunningTime="2026-01-21 00:08:31.912874811 +0000 UTC m=+144.152742457" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.941180 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hltn" Jan 21 00:08:31 crc kubenswrapper[4873]: I0121 00:08:31.942032 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:31 crc kubenswrapper[4873]: E0121 00:08:31.949370 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.449350851 +0000 UTC m=+144.689218487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.039231 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q5pv"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.043326 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.043452 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.543437429 +0000 UTC m=+144.783305075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.043639 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.043946 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.543938964 +0000 UTC m=+144.783806610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.141435 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:32 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:32 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:32 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.141485 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.146254 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.146576 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.646543314 +0000 UTC m=+144.886410960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.253056 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.253429 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.753413352 +0000 UTC m=+144.993281048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.356993 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.357322 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.8573074 +0000 UTC m=+145.097175046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.402701 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xv5c9"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.461168 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.461515 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:32.961504057 +0000 UTC m=+145.201371703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.506568 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.561761 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.562132 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.062117419 +0000 UTC m=+145.301985065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.600929 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.663149 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.663761 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.163744871 +0000 UTC m=+145.403612517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.764114 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.764308 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.26427908 +0000 UTC m=+145.504146726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.864687 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkdsj"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.865911 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.866202 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.866506 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.366495749 +0000 UTC m=+145.606363395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.867846 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.889725 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkdsj"] Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.923611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" event={"ID":"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f","Type":"ContainerStarted","Data":"4f3df57eba9886f1e998316caf095c70158bc262c15110e51a900191b2de74ee"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.923650 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" event={"ID":"8afe39b0-54e9-4f29-98a3-52ec15ffcb6f","Type":"ContainerStarted","Data":"3e7b350117e0cfad1788f940d8a1741780fa70c90c1d5e168b19e73d01c8c728"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.943941 4873 generic.go:334] "Generic (PLEG): container finished" podID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerID="4e2aedb62c213494362c2ba580d43cedd47d2e60441d87305f6ae1aefb9f96f5" exitCode=0 Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.944008 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerDied","Data":"4e2aedb62c213494362c2ba580d43cedd47d2e60441d87305f6ae1aefb9f96f5"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.944033 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerStarted","Data":"43aba0551423c93174b018b7d71b8b0d4fac6f6c9999d327b53292339de776bd"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.945421 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.953776 4873 generic.go:334] "Generic (PLEG): container finished" podID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerID="c4f61160c84c0acb94d2278286de576b8176311f2ba698ebaa91d9e4666ddd71" exitCode=0 Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.953879 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerDied","Data":"c4f61160c84c0acb94d2278286de576b8176311f2ba698ebaa91d9e4666ddd71"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.953906 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerStarted","Data":"a207b0e925650e1d58067a7c087d8514508c3a7589c193bfe53d82c25e12f7cd"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.964793 
4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8l49" podStartSLOduration=125.964778402 podStartE2EDuration="2m5.964778402s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:32.963245096 +0000 UTC m=+145.203112742" watchObservedRunningTime="2026-01-21 00:08:32.964778402 +0000 UTC m=+145.204646048" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.967594 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerStarted","Data":"d3813b9cb5a4d7a06a63c525b7331d0c90775cd816eaa8884193190cc88269de"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.967860 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.968018 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.968061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.968097 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlh7\" (UniqueName: \"kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:32 crc kubenswrapper[4873]: E0121 00:08:32.968153 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.468140721 +0000 UTC m=+145.708008367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.977932 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q2stp" event={"ID":"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5","Type":"ContainerStarted","Data":"9831df5318c703033d7fded5eb35b96f2b5f5dccc7e1ebc90feeb4448d5a0ddb"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.977985 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q2stp" event={"ID":"e2ae5418-a2f2-41d8-b834-ac62b21dc1e5","Type":"ContainerStarted","Data":"1cb4a9355f5f3d6bebd5cb6d91bb0c477b4ce31b54e40d86fbca5dbd5f7240f6"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.978654 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.981102 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" event={"ID":"7064ffc6-970d-4592-a979-ed7fd110cbc8","Type":"ContainerStarted","Data":"c5c752c1bbcfdc7a680c325720bbfa5744cd18a32522ab71b8c1987385cd1311"} Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.982018 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.993377 4873 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bhsdr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 21 00:08:32 crc kubenswrapper[4873]: I0121 00:08:32.993427 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.020249 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" event={"ID":"0a5e6054-8bdc-4431-914c-ba885604a20b","Type":"ContainerStarted","Data":"4459b11085ad362ad9f01a74bd6c3b932d998eccaa0d3c4a7e9a70c03723fe55"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.045689 4873 generic.go:334] "Generic (PLEG): container finished" podID="01bd6a42-c0f6-4193-b4d3-3366e082a486" containerID="14744317207a9a1cc4002768462aecd4a557cdfc7e41663ff853318f6ba6e799" exitCode=0 Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.045775 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" event={"ID":"01bd6a42-c0f6-4193-b4d3-3366e082a486","Type":"ContainerDied","Data":"14744317207a9a1cc4002768462aecd4a557cdfc7e41663ff853318f6ba6e799"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.055285 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ch9hg" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.068644 4873 generic.go:334] "Generic (PLEG): container finished" podID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerID="27679cd03a138dd9c89b2224a6d729b08b3e1952bbd7d24538da5a526bb533af" exitCode=0 Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.068732 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerDied","Data":"27679cd03a138dd9c89b2224a6d729b08b3e1952bbd7d24538da5a526bb533af"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.068755 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerStarted","Data":"cd1e6c918404b00a676b1bca8903bbd867858d6a3fc504fd94a39ade4c75dbd0"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.069356 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.069414 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.069485 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlh7\" (UniqueName: \"kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.069543 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.072174 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.572163944 +0000 UTC m=+145.812031590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.073484 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.074997 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.091533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlh7\" (UniqueName: \"kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7\") pod \"redhat-marketplace-hkdsj\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.134437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" event={"ID":"d9676863-bb22-4886-bed1-f22b6aa37f90","Type":"ContainerStarted","Data":"fdbddc935f93b55accacba1d2c450d9eb505fd2a1c449be2b13991d56ceb6e4a"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.145598 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:33 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:33 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:33 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.145661 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.154873 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" event={"ID":"b60f95eb-22b2-47e1-8c88-71ff19f08465","Type":"ContainerStarted","Data":"b85ee70d47f03e6157fe002f7f088c2748a679bb42c5e4d2a3e98e7c9f840f80"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.170050 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:33 crc 
kubenswrapper[4873]: E0121 00:08:33.170967 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.670953741 +0000 UTC m=+145.910821387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.173299 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" event={"ID":"632798b6-480c-42d9-a549-7b2e8a87b1e2","Type":"ContainerStarted","Data":"c2e415f6fffc4e2a83a64c09e3fdc23fca7b8539aff068185793a350792369b4"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.173343 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" event={"ID":"632798b6-480c-42d9-a549-7b2e8a87b1e2","Type":"ContainerStarted","Data":"2df70046cf0a2201362ed239a3dae6f75957825226800cc25a371a2fcfd0555a"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.185140 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" podStartSLOduration=125.185122341 podStartE2EDuration="2m5.185122341s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:33.161431519 +0000 UTC m=+145.401299155" watchObservedRunningTime="2026-01-21 00:08:33.185122341 +0000 UTC m=+145.424989987" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.186247 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q2stp" podStartSLOduration=9.186241294 podStartE2EDuration="9.186241294s" podCreationTimestamp="2026-01-21 00:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:33.184922685 +0000 UTC m=+145.424790331" watchObservedRunningTime="2026-01-21 00:08:33.186241294 +0000 UTC m=+145.426108940" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.190698 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6rk8p" event={"ID":"45ee4dd9-2045-4c76-9ddd-8f0a81f148dd","Type":"ContainerStarted","Data":"9dce787150c1012c318475946c86e5ed1e3e1c8d23ec29e100bf74d69a1fa21e"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.205895 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.253470 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-prftc" event={"ID":"fdd8df50-7130-49f2-b68f-6cb7c95a45de","Type":"ContainerStarted","Data":"f245f667be1455678537ff99c732ffc3491997ea794422935bade07c0e3aaac0"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.271424 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.272429 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.772416618 +0000 UTC m=+146.012284264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.280882 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.282114 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.303371 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.315818 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" event={"ID":"aeb6b18b-cc94-4a8a-a970-e39447726765","Type":"ContainerStarted","Data":"fef4093b39e20c5b144e63f0c9de6c21e394c6ac8c5c7e7752c3b63eeb20c865"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.333099 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" event={"ID":"ce32fab5-629a-4cb5-b5a7-c91b113ae67d","Type":"ContainerStarted","Data":"e95d324cb99647ad70d41d40b44e3459079dcb32700d728a45f13dcabc06ff63"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.333140 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pf2qm" event={"ID":"ce32fab5-629a-4cb5-b5a7-c91b113ae67d","Type":"ContainerStarted","Data":"49debc8c59affabfecd99df7ed5d4ef199e493fd1e6a7b2752abffbf2a487b72"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.350931 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8kf2" event={"ID":"f224040b-f12f-43a9-a425-f971b7f1b028","Type":"ContainerStarted","Data":"88a440b1eca5d8c4609a2bae4cbbd2712696a5947ea13f63bb54fb7fcab6df85"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.358596 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4hj68" podStartSLOduration=125.358583731 podStartE2EDuration="2m5.358583731s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:33.358080206 +0000 UTC m=+145.597947852" watchObservedRunningTime="2026-01-21 00:08:33.358583731 +0000 UTC m=+145.598451377" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.375950 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.376169 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.876129411 +0000 UTC m=+146.115997057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.376311 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.376380 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szff9\" (UniqueName: \"kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.376451 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.376627 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.377500 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.877485372 +0000 UTC m=+146.117353018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.414415 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mx25s" podStartSLOduration=125.414394195 podStartE2EDuration="2m5.414394195s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:33.397307899 +0000 UTC m=+145.637175555" watchObservedRunningTime="2026-01-21 00:08:33.414394195 +0000 UTC m=+145.654261831" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.420068 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9d5lf" event={"ID":"b58f6ba2-2455-47b0-82db-1b79fe74520e","Type":"ContainerStarted","Data":"d85b6c01da580b7785211762b46d3d607c9cfb3a2540fc3edd09ac61d9c1e17f"} Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.452922 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv27r" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.469962 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.490204 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.490433 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szff9\" (UniqueName: \"kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.490517 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.490800 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.491772 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:33.991755237 +0000 UTC m=+146.231622883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.493888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.496176 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.536789 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r9ljc" podStartSLOduration=125.536772961 podStartE2EDuration="2m5.536772961s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:33.461402648 +0000 UTC m=+145.701270294" watchObservedRunningTime="2026-01-21 00:08:33.536772961 +0000 UTC m=+145.776640607" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.567991 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szff9\" (UniqueName: \"kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9\") pod \"redhat-marketplace-n2txz\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.592699 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.593077 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.09306576 +0000 UTC m=+146.332933406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.617965 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.694003 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.694364 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.194348141 +0000 UTC m=+146.434215787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.798188 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.798517 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.298504577 +0000 UTC m=+146.538372223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.886869 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkdsj"] Jan 21 00:08:33 crc kubenswrapper[4873]: I0121 00:08:33.898924 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:33 crc kubenswrapper[4873]: E0121 00:08:33.899261 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.399247012 +0000 UTC m=+146.639114658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.000849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.001194 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.501182573 +0000 UTC m=+146.741050219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.054251 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqn92"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.055322 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.067506 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.096892 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqn92"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.102415 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.102604 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.102665 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxh6\" (UniqueName: \"kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.102706 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.102801 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.602785745 +0000 UTC m=+146.842653381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.143836 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:34 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:34 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:34 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.143885 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.144628 4873 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.204740 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.205223 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.705205609 +0000 UTC m=+146.945073255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.205659 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.205740 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxh6\" (UniqueName: \"kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.205802 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.206712 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.206785 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.243423 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.258532 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxh6\" (UniqueName: \"kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6\") pod \"redhat-operators-dqn92\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307158 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.307344 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.807318075 +0000 UTC m=+147.047185721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307412 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307565 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307735 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.307800 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.308707 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.808689846 +0000 UTC m=+147.048557492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-flh6k" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.309144 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.312193 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.312320 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.312531 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.393302 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.408976 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:34 crc kubenswrapper[4873]: E0121 00:08:34.409200 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 00:08:34.909153592 +0000 UTC m=+147.149021238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.429678 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" event={"ID":"b60f95eb-22b2-47e1-8c88-71ff19f08465","Type":"ContainerStarted","Data":"4c0dd19c69923dde4e1fdc822b75a1574bc58bd99b67b969c2bb1dbffd024414"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.429717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" event={"ID":"b60f95eb-22b2-47e1-8c88-71ff19f08465","Type":"ContainerStarted","Data":"836065480ef793ede99a95b50235cd3503f8f3b6d64bd487a6d1f0b5fd1ce73b"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.433611 4873 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T00:08:34.144638604Z","Handler":null,"Name":""} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.434420 4873 generic.go:334] "Generic (PLEG): container finished" podID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerID="1f15c02cac71b277463451fd8ee9977ed936aecf484e1b6d91c11a21f0549fd3" exitCode=0 Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.434477 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerDied","Data":"1f15c02cac71b277463451fd8ee9977ed936aecf484e1b6d91c11a21f0549fd3"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.435997 4873 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.436023 4873 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.438620 4873 generic.go:334] "Generic (PLEG): container finished" podID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerID="9c56705048557fef862528366306c811c582d5f99eb95352a489a031bd2063a9" exitCode=0 Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.439352 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerDied","Data":"9c56705048557fef862528366306c811c582d5f99eb95352a489a031bd2063a9"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.439379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerStarted","Data":"ab8b7e8da63bc309865644d22060fc071d13d151e898b6fc60f1f762d4c872f7"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.458694 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" 
event={"ID":"01bd6a42-c0f6-4193-b4d3-3366e082a486","Type":"ContainerStarted","Data":"fb4fe2b325b5eaf17735c24013b46c7d5f50bbda4a958134133c2ee1096a7a33"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.469044 4873 generic.go:334] "Generic (PLEG): container finished" podID="20b87797-f91d-4f1a-b0d7-febdafa8e7ba" containerID="e280aa140ecccf0a2c84d4e15273f53e55ef395fc3b3c03498cb3dfc9d9bc3e8" exitCode=0 Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.469213 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" event={"ID":"20b87797-f91d-4f1a-b0d7-febdafa8e7ba","Type":"ContainerDied","Data":"e280aa140ecccf0a2c84d4e15273f53e55ef395fc3b3c03498cb3dfc9d9bc3e8"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.470023 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqslw"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.471059 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.471676 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerStarted","Data":"62eb832f9ba75b1d5f12d39d962c5415577e02489a0dc1f28e284adf8e9ed33f"} Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.488894 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.497061 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.522269 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.524697 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" podStartSLOduration=126.524679625 podStartE2EDuration="2m6.524679625s" podCreationTimestamp="2026-01-21 00:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:34.497407097 +0000 UTC m=+146.737274743" watchObservedRunningTime="2026-01-21 00:08:34.524679625 +0000 UTC m=+146.764547271" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.528893 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqslw"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.530826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.531263 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.531616 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2f2\" (UniqueName: \"kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.531763 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.534490 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.545162 4873 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.545201 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.597117 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-flh6k\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.632668 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.632910 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2f2\" (UniqueName: \"kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.632960 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.633004 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.633940 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.634622 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.659650 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2f2\" 
(UniqueName: \"kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2\") pod \"redhat-operators-hqslw\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.695860 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.803849 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:08:34 crc kubenswrapper[4873]: W0121 00:08:34.835329 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4c54d91c4cd60862c68551583340c5b02b9e2a93fc73faed277244ccdbd7f9d7 WatchSource:0}: Error finding container 4c54d91c4cd60862c68551583340c5b02b9e2a93fc73faed277244ccdbd7f9d7: Status 404 returned error can't find the container with id 4c54d91c4cd60862c68551583340c5b02b9e2a93fc73faed277244ccdbd7f9d7 Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.894203 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqn92"] Jan 21 00:08:34 crc kubenswrapper[4873]: I0121 00:08:34.896980 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:34 crc kubenswrapper[4873]: W0121 00:08:34.974813 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d11f1d_5410_4a20_9b17_2f04a831a398.slice/crio-feb209bcb6b979b6a0208eff184b24fe7daeb6efd441a44fa6315d72b662266b WatchSource:0}: Error finding container feb209bcb6b979b6a0208eff184b24fe7daeb6efd441a44fa6315d72b662266b: Status 404 returned error can't find the container with id feb209bcb6b979b6a0208eff184b24fe7daeb6efd441a44fa6315d72b662266b Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.140967 4873 patch_prober.go:28] interesting pod/router-default-5444994796-s5hrf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:08:35 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Jan 21 00:08:35 crc kubenswrapper[4873]: [+]process-running ok Jan 21 00:08:35 crc kubenswrapper[4873]: healthz check failed Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.141013 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5hrf" podUID="12926866-b4b6-4c68-b010-e30359824005" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:08:35 crc kubenswrapper[4873]: W0121 00:08:35.174060 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-59dd5e251342710680b5ebe6985622d77236b26ce19925fb2a06bb0a161442ff WatchSource:0}: Error finding 
container 59dd5e251342710680b5ebe6985622d77236b26ce19925fb2a06bb0a161442ff: Status 404 returned error can't find the container with id 59dd5e251342710680b5ebe6985622d77236b26ce19925fb2a06bb0a161442ff Jan 21 00:08:35 crc kubenswrapper[4873]: W0121 00:08:35.181777 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5efa6452d33a101a370814b7be0e99e87850bca725bc8a06466d8b35d191bf37 WatchSource:0}: Error finding container 5efa6452d33a101a370814b7be0e99e87850bca725bc8a06466d8b35d191bf37: Status 404 returned error can't find the container with id 5efa6452d33a101a370814b7be0e99e87850bca725bc8a06466d8b35d191bf37 Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.197914 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqslw"] Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.248786 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.256710 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n7fpv" Jan 21 00:08:35 crc kubenswrapper[4873]: W0121 00:08:35.318012 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4273d27_1592_4728_8bd2_14888045fffb.slice/crio-c51af64bb98a0f535d3b0e3cb65ad1d8d788fc926cb31e116e7447136653e371 WatchSource:0}: Error finding container c51af64bb98a0f535d3b0e3cb65ad1d8d788fc926cb31e116e7447136653e371: Status 404 returned error can't find the container with id c51af64bb98a0f535d3b0e3cb65ad1d8d788fc926cb31e116e7447136653e371 Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.529920 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3c38128088542af2d69d06076387c3dca6195840178bd869d3cbe5f37924939b"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.529964 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"59dd5e251342710680b5ebe6985622d77236b26ce19925fb2a06bb0a161442ff"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.530185 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.532436 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.549192 4873 generic.go:334] "Generic (PLEG): container finished" podID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerID="13fd1578f875694bff8e22b92bd592bcb6eed78891a1b953e82bbe38eaae6760" exitCode=0 Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.549390 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerDied","Data":"13fd1578f875694bff8e22b92bd592bcb6eed78891a1b953e82bbe38eaae6760"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.562781 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerStarted","Data":"cff64af092d45f2995740af3715b791b7e9a6c27830eb333a043878790f1520a"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.562821 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerStarted","Data":"feb209bcb6b979b6a0208eff184b24fe7daeb6efd441a44fa6315d72b662266b"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.577372 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" event={"ID":"b60f95eb-22b2-47e1-8c88-71ff19f08465","Type":"ContainerStarted","Data":"236d2e71379b99a7917e2d9f9cb9fe14d71cb6e3f82904de74626ff0348a8e1c"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.601802 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c26924bb807d5ec05632ed8eb828762a53b2e84289cd5967dd46dea7cb701310"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.601850 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4c54d91c4cd60862c68551583340c5b02b9e2a93fc73faed277244ccdbd7f9d7"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.605771 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-72s9c" podStartSLOduration=11.605761221 podStartE2EDuration="11.605761221s" podCreationTimestamp="2026-01-21 00:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:35.603206165 +0000 UTC m=+147.843073811" watchObservedRunningTime="2026-01-21 00:08:35.605761221 +0000 UTC m=+147.845628867" Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.612909 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerStarted","Data":"c51af64bb98a0f535d3b0e3cb65ad1d8d788fc926cb31e116e7447136653e371"} Jan 21 00:08:35 crc kubenswrapper[4873]: I0121 00:08:35.620106 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5efa6452d33a101a370814b7be0e99e87850bca725bc8a06466d8b35d191bf37"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.107715 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.172689 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.174020 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.183265 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ingress/router-default-5444994796-s5hrf" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.342505 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gjlf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.342795 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9gjlf" podUID="110863cb-5af1-4e46-9e40-9831dfa25875" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.342572 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9gjlf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.343161 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9gjlf" podUID="110863cb-5af1-4e46-9e40-9831dfa25875" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.353642 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.371633 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.371671 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.373703 4873 patch_prober.go:28] interesting pod/console-f9d7485db-zffj8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.373748 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zffj8" podUID="4e56ba3a-006a-4ae3-ae87-5c9babf78867" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.493657 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") pod \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.493714 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvb7h\" (UniqueName: \"kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h\") pod \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.493848 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume\") pod \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\" (UID: \"20b87797-f91d-4f1a-b0d7-febdafa8e7ba\") " Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.496111 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "20b87797-f91d-4f1a-b0d7-febdafa8e7ba" (UID: "20b87797-f91d-4f1a-b0d7-febdafa8e7ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.502149 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h" (OuterVolumeSpecName: "kube-api-access-kvb7h") pod "20b87797-f91d-4f1a-b0d7-febdafa8e7ba" (UID: "20b87797-f91d-4f1a-b0d7-febdafa8e7ba"). InnerVolumeSpecName "kube-api-access-kvb7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.503724 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20b87797-f91d-4f1a-b0d7-febdafa8e7ba" (UID: "20b87797-f91d-4f1a-b0d7-febdafa8e7ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.595247 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.595287 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.595296 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvb7h\" (UniqueName: \"kubernetes.io/projected/20b87797-f91d-4f1a-b0d7-febdafa8e7ba-kube-api-access-kvb7h\") on node \"crc\" DevicePath \"\"" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.623612 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4273d27-1592-4728-8bd2-14888045fffb" containerID="bb648a0f76631b0cd144e4a32fd789f7d0fe391a3b6fa720c26e13895bc5fabe" exitCode=0 Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.624483 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerDied","Data":"bb648a0f76631b0cd144e4a32fd789f7d0fe391a3b6fa720c26e13895bc5fabe"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.639047 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" event={"ID":"20b87797-f91d-4f1a-b0d7-febdafa8e7ba","Type":"ContainerDied","Data":"3c05f627b89df706c21431a67d1bde0de9a5b9eaa8faa61fb0bc47c183731089"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.639084 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c05f627b89df706c21431a67d1bde0de9a5b9eaa8faa61fb0bc47c183731089" Jan 21 00:08:36 crc 
kubenswrapper[4873]: I0121 00:08:36.639131 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm" Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.661611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0b947daaefa647adb29bedfcda646ff3a661b7cf9e4d49bff60bb92786356cca"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.688644 4873 generic.go:334] "Generic (PLEG): container finished" podID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerID="cff64af092d45f2995740af3715b791b7e9a6c27830eb333a043878790f1520a" exitCode=0 Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.688723 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerDied","Data":"cff64af092d45f2995740af3715b791b7e9a6c27830eb333a043878790f1520a"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.714293 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" event={"ID":"530a993a-eb48-4622-abec-7f3af78b3c40","Type":"ContainerStarted","Data":"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.714341 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" event={"ID":"530a993a-eb48-4622-abec-7f3af78b3c40","Type":"ContainerStarted","Data":"4161fba30e621dee3504820abcc51cf0674e7cf6ec3f2c21ccc0675bbfe28ace"} Jan 21 00:08:36 crc kubenswrapper[4873]: I0121 00:08:36.745145 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" podStartSLOduration=129.745131134 podStartE2EDuration="2m9.745131134s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:36.744533596 +0000 UTC m=+148.984401242" watchObservedRunningTime="2026-01-21 00:08:36.745131134 +0000 UTC m=+148.984998780" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.010503 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 00:08:37 crc kubenswrapper[4873]: E0121 00:08:37.010758 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b87797-f91d-4f1a-b0d7-febdafa8e7ba" containerName="collect-profiles" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.010772 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b87797-f91d-4f1a-b0d7-febdafa8e7ba" containerName="collect-profiles" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.010870 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b87797-f91d-4f1a-b0d7-febdafa8e7ba" containerName="collect-profiles" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.011251 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.013636 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.019393 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.045745 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.114830 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.114881 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.216193 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.216255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.216344 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.256579 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.340188 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.468766 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.468816 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.487699 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.721506 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.734378 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf68l" Jan 21 00:08:37 crc kubenswrapper[4873]: I0121 00:08:37.784598 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 00:08:38 crc kubenswrapper[4873]: I0121 00:08:38.763505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cbd178ff-55d3-4897-9990-4b4953cf9160","Type":"ContainerStarted","Data":"bf72211bbad21328c510a4d013fb3c5e0635f35a01d4ae99069cdad5b5869e15"} Jan 21 00:08:38 crc kubenswrapper[4873]: I0121 00:08:38.763794 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cbd178ff-55d3-4897-9990-4b4953cf9160","Type":"ContainerStarted","Data":"ada8671a94f88eeaba9e5cdb0b0d9669e433f080759b176143a25ea735765196"} Jan 21 00:08:38 crc kubenswrapper[4873]: I0121 00:08:38.793447 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.7934219310000001 podStartE2EDuration="1.793421931s" podCreationTimestamp="2026-01-21 00:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:08:38.786867497 +0000 UTC m=+151.026735143" watchObservedRunningTime="2026-01-21 00:08:38.793421931 +0000 UTC m=+151.033289577" Jan 21 00:08:39 crc kubenswrapper[4873]: I0121 00:08:39.821213 4873 generic.go:334] "Generic (PLEG): container finished" podID="cbd178ff-55d3-4897-9990-4b4953cf9160" containerID="bf72211bbad21328c510a4d013fb3c5e0635f35a01d4ae99069cdad5b5869e15" exitCode=0 Jan 21 00:08:39 crc kubenswrapper[4873]: I0121 00:08:39.821439 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cbd178ff-55d3-4897-9990-4b4953cf9160","Type":"ContainerDied","Data":"bf72211bbad21328c510a4d013fb3c5e0635f35a01d4ae99069cdad5b5869e15"} Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.247903 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.331076 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access\") pod \"cbd178ff-55d3-4897-9990-4b4953cf9160\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.331209 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir\") pod \"cbd178ff-55d3-4897-9990-4b4953cf9160\" (UID: \"cbd178ff-55d3-4897-9990-4b4953cf9160\") " Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.331516 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbd178ff-55d3-4897-9990-4b4953cf9160" (UID: "cbd178ff-55d3-4897-9990-4b4953cf9160"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.357283 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbd178ff-55d3-4897-9990-4b4953cf9160" (UID: "cbd178ff-55d3-4897-9990-4b4953cf9160"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.432883 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbd178ff-55d3-4897-9990-4b4953cf9160-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.432917 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbd178ff-55d3-4897-9990-4b4953cf9160-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.871789 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cbd178ff-55d3-4897-9990-4b4953cf9160","Type":"ContainerDied","Data":"ada8671a94f88eeaba9e5cdb0b0d9669e433f080759b176143a25ea735765196"} Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.871842 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada8671a94f88eeaba9e5cdb0b0d9669e433f080759b176143a25ea735765196" Jan 21 00:08:41 crc kubenswrapper[4873]: I0121 00:08:41.871916 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 00:08:42 crc kubenswrapper[4873]: I0121 00:08:42.559135 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q2stp" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.240101 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 00:08:43 crc kubenswrapper[4873]: E0121 00:08:43.240313 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd178ff-55d3-4897-9990-4b4953cf9160" containerName="pruner" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.240325 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd178ff-55d3-4897-9990-4b4953cf9160" containerName="pruner" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.240447 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd178ff-55d3-4897-9990-4b4953cf9160" containerName="pruner" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.240849 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.243401 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.249115 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.250060 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.370292 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.370440 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.472198 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.472307 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.472404 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.516042 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:43 crc kubenswrapper[4873]: I0121 00:08:43.585941 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:08:46 crc kubenswrapper[4873]: I0121 00:08:46.348038 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9gjlf" Jan 21 00:08:46 crc kubenswrapper[4873]: I0121 00:08:46.407009 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:46 crc kubenswrapper[4873]: I0121 00:08:46.412324 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zffj8" Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.192988 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.193255 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.193407 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" containerID="cri-o://cac87290b7ca47aea50cdc746fc38a9ea690748574c758f606a6826170d4cbdd" gracePeriod=30 Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.193632 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" containerID="cri-o://a9c32412789155d4d3bacab94807fce448cc1d402b9db0a4ce68830ef5b82eff" gracePeriod=30 Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.806073 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:50 crc kubenswrapper[4873]: I0121 00:08:50.811747 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7f7e62f-ce78-4588-994f-8ab17d7821d1-metrics-certs\") pod \"network-metrics-daemon-mx2js\" (UID: \"c7f7e62f-ce78-4588-994f-8ab17d7821d1\") " pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:51 crc kubenswrapper[4873]: I0121 00:08:51.036976 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx2js" Jan 21 00:08:52 crc kubenswrapper[4873]: I0121 00:08:52.031948 4873 generic.go:334] "Generic (PLEG): container finished" podID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerID="cac87290b7ca47aea50cdc746fc38a9ea690748574c758f606a6826170d4cbdd" exitCode=0 Jan 21 00:08:52 crc kubenswrapper[4873]: I0121 00:08:52.032001 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" event={"ID":"efa5d9f4-85b1-4b35-9e6c-9c463462104f","Type":"ContainerDied","Data":"cac87290b7ca47aea50cdc746fc38a9ea690748574c758f606a6826170d4cbdd"} Jan 21 00:08:53 crc kubenswrapper[4873]: I0121 00:08:53.040088 4873 generic.go:334] "Generic (PLEG): container finished" podID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerID="a9c32412789155d4d3bacab94807fce448cc1d402b9db0a4ce68830ef5b82eff" exitCode=0 Jan 21 00:08:53 crc kubenswrapper[4873]: I0121 00:08:53.040130 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" event={"ID":"d8c5454c-78f0-4b3e-a5d8-50319b08070d","Type":"ContainerDied","Data":"a9c32412789155d4d3bacab94807fce448cc1d402b9db0a4ce68830ef5b82eff"} Jan 21 00:08:54 crc kubenswrapper[4873]: I0121 00:08:54.903595 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:08:56 crc kubenswrapper[4873]: I0121 00:08:56.710084 4873 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7tbgp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 21 00:08:56 crc kubenswrapper[4873]: I0121 00:08:56.710392 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 21 00:08:57 crc kubenswrapper[4873]: I0121 00:08:57.349190 4873 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nrngb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:08:57 crc kubenswrapper[4873]: I0121 00:08:57.349283 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 00:09:01 crc kubenswrapper[4873]: I0121 00:09:01.631780 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:09:01 crc kubenswrapper[4873]: I0121 00:09:01.632417 4873 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:09:02 crc kubenswrapper[4873]: I0121 00:09:02.146438 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e08f95c-603b-4967-a545-b4cc31eeca6d" containerID="903774dedd0e6d5339b54a70ee44ffa1b6aa2192c3974d07a6c32ca5364fa17f" exitCode=0 Jan 21 00:09:02 crc kubenswrapper[4873]: I0121 00:09:02.146483 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-5mcvl" event={"ID":"8e08f95c-603b-4967-a545-b4cc31eeca6d","Type":"ContainerDied","Data":"903774dedd0e6d5339b54a70ee44ffa1b6aa2192c3974d07a6c32ca5364fa17f"} Jan 21 00:09:06 crc kubenswrapper[4873]: I0121 00:09:06.710793 4873 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7tbgp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 21 00:09:06 crc kubenswrapper[4873]: I0121 00:09:06.711181 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 21 00:09:06 crc kubenswrapper[4873]: I0121 00:09:06.790602 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5jfsf" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.349677 4873 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nrngb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.349760 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.871612 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.930685 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:07 crc kubenswrapper[4873]: E0121 00:09:07.930893 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.930905 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.931012 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" containerName="route-controller-manager" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.931522 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.946128 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.975869 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt685\" (UniqueName: \"kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685\") pod \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.976058 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca\") pod \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.976830 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca" (OuterVolumeSpecName: "client-ca") pod "efa5d9f4-85b1-4b35-9e6c-9c463462104f" (UID: "efa5d9f4-85b1-4b35-9e6c-9c463462104f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.976915 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config\") pod \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.977392 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config" (OuterVolumeSpecName: "config") pod "efa5d9f4-85b1-4b35-9e6c-9c463462104f" (UID: "efa5d9f4-85b1-4b35-9e6c-9c463462104f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.977465 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert\") pod \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\" (UID: \"efa5d9f4-85b1-4b35-9e6c-9c463462104f\") " Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.977749 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.977775 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5d9f4-85b1-4b35-9e6c-9c463462104f-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:07 crc kubenswrapper[4873]: I0121 00:09:07.995797 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685" (OuterVolumeSpecName: "kube-api-access-kt685") pod "efa5d9f4-85b1-4b35-9e6c-9c463462104f" (UID: "efa5d9f4-85b1-4b35-9e6c-9c463462104f"). InnerVolumeSpecName "kube-api-access-kt685". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.008711 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "efa5d9f4-85b1-4b35-9e6c-9c463462104f" (UID: "efa5d9f4-85b1-4b35-9e6c-9c463462104f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.078791 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8ns\" (UniqueName: \"kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.079135 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.079164 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.079230 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.079276 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5d9f4-85b1-4b35-9e6c-9c463462104f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.079290 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt685\" (UniqueName: \"kubernetes.io/projected/efa5d9f4-85b1-4b35-9e6c-9c463462104f-kube-api-access-kt685\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.180198 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.180277 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8ns\" (UniqueName: \"kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.180321 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.180351 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.182293 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.182372 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.189292 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.197730 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8ns\" (UniqueName: \"kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns\") pod \"route-controller-manager-5d8b8d4467-2vnrv\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.209664 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" event={"ID":"efa5d9f4-85b1-4b35-9e6c-9c463462104f","Type":"ContainerDied","Data":"1c540e7f3492c8441e50e02df15c5f808f5a2208a7e1705a0e7ca0130a2b861a"} Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.209717 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.209973 4873 scope.go:117] "RemoveContainer" containerID="cac87290b7ca47aea50cdc746fc38a9ea690748574c758f606a6826170d4cbdd" Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.229002 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.233017 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrngb"] Jan 21 00:09:08 crc kubenswrapper[4873]: I0121 00:09:08.251214 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.706180 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:09:09 crc kubenswrapper[4873]: E0121 00:09:09.741667 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 00:09:09 crc kubenswrapper[4873]: E0121 00:09:09.741859 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx59p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qjfd5_openshift-marketplace(c2aa2871-6143-459b-9607-1fdde2f2f22c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:09 crc kubenswrapper[4873]: E0121 00:09:09.743351 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qjfd5" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.834705 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca\") pod \"8e08f95c-603b-4967-a545-b4cc31eeca6d\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.834762 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6krt\" (UniqueName: \"kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt\") pod \"8e08f95c-603b-4967-a545-b4cc31eeca6d\" (UID: \"8e08f95c-603b-4967-a545-b4cc31eeca6d\") " Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.835621 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca" (OuterVolumeSpecName: "serviceca") pod "8e08f95c-603b-4967-a545-b4cc31eeca6d" (UID: "8e08f95c-603b-4967-a545-b4cc31eeca6d"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.848896 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt" (OuterVolumeSpecName: "kube-api-access-r6krt") pod "8e08f95c-603b-4967-a545-b4cc31eeca6d" (UID: "8e08f95c-603b-4967-a545-b4cc31eeca6d"). InnerVolumeSpecName "kube-api-access-r6krt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.935750 4873 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e08f95c-603b-4967-a545-b4cc31eeca6d-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:09 crc kubenswrapper[4873]: I0121 00:09:09.935775 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6krt\" (UniqueName: \"kubernetes.io/projected/8e08f95c-603b-4967-a545-b4cc31eeca6d-kube-api-access-r6krt\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:10 crc kubenswrapper[4873]: I0121 00:09:10.075447 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa5d9f4-85b1-4b35-9e6c-9c463462104f" path="/var/lib/kubelet/pods/efa5d9f4-85b1-4b35-9e6c-9c463462104f/volumes" Jan 21 00:09:10 crc kubenswrapper[4873]: I0121 00:09:10.216355 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:10 crc kubenswrapper[4873]: I0121 00:09:10.222497 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-5mcvl" Jan 21 00:09:10 crc kubenswrapper[4873]: I0121 00:09:10.223219 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-5mcvl" event={"ID":"8e08f95c-603b-4967-a545-b4cc31eeca6d","Type":"ContainerDied","Data":"90e1a61c3bd9426265ff7c2d7ebf3e2adf4d4fcef86259ea629f913267e7eee4"} Jan 21 00:09:10 crc kubenswrapper[4873]: I0121 00:09:10.223296 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e1a61c3bd9426265ff7c2d7ebf3e2adf4d4fcef86259ea629f913267e7eee4" Jan 21 00:09:11 crc kubenswrapper[4873]: E0121 00:09:11.255236 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qjfd5" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" Jan 21 00:09:11 crc kubenswrapper[4873]: E0121 00:09:11.318960 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 00:09:11 crc kubenswrapper[4873]: E0121 00:09:11.319107 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz6cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vbxjc_openshift-marketplace(c0fd5454-ac0d-4873-a5cc-d690883223a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:11 crc kubenswrapper[4873]: E0121 00:09:11.320570 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-vbxjc" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" Jan 21 00:09:14 crc kubenswrapper[4873]: I0121 00:09:14.538580 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.007525 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vbxjc" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.345620 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.345796 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsxh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dqn92_openshift-marketplace(72d11f1d-5410-4a20-9b17-2f04a831a398): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.346978 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dqn92" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.555029 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.555164 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vd2f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hqslw_openshift-marketplace(f4273d27-1592-4728-8bd2-14888045fffb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:15 crc kubenswrapper[4873]: E0121 00:09:15.556331 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hqslw" podUID="f4273d27-1592-4728-8bd2-14888045fffb" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.077186 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 00:09:16 crc kubenswrapper[4873]: E0121 00:09:16.077410 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e08f95c-603b-4967-a545-b4cc31eeca6d" containerName="image-pruner" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.077422 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e08f95c-603b-4967-a545-b4cc31eeca6d" containerName="image-pruner" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.077525 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e08f95c-603b-4967-a545-b4cc31eeca6d" containerName="image-pruner" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.077935 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.084719 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.120075 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.120376 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.221982 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.222022 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.222149 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.248428 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:16 crc kubenswrapper[4873]: I0121 00:09:16.414191 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:17 crc kubenswrapper[4873]: E0121 00:09:17.112147 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dqn92" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" Jan 21 00:09:17 crc kubenswrapper[4873]: E0121 00:09:17.112232 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hqslw" podUID="f4273d27-1592-4728-8bd2-14888045fffb" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.178887 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.206030 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:17 crc kubenswrapper[4873]: E0121 00:09:17.206243 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.206258 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.206371 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.206729 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.214965 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.234747 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert\") pod \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.234802 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94wwf\" (UniqueName: \"kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf\") pod \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.234839 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config\") pod \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.234860 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles\") pod \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.234880 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca\") pod \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\" (UID: \"d8c5454c-78f0-4b3e-a5d8-50319b08070d\") " Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235013 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235044 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235320 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcprg\" (UniqueName: \"kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235396 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235441 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235716 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8c5454c-78f0-4b3e-a5d8-50319b08070d" (UID: "d8c5454c-78f0-4b3e-a5d8-50319b08070d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235861 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8c5454c-78f0-4b3e-a5d8-50319b08070d" (UID: "d8c5454c-78f0-4b3e-a5d8-50319b08070d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.235927 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config" (OuterVolumeSpecName: "config") pod "d8c5454c-78f0-4b3e-a5d8-50319b08070d" (UID: "d8c5454c-78f0-4b3e-a5d8-50319b08070d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.240156 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8c5454c-78f0-4b3e-a5d8-50319b08070d" (UID: "d8c5454c-78f0-4b3e-a5d8-50319b08070d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.240476 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf" (OuterVolumeSpecName: "kube-api-access-94wwf") pod "d8c5454c-78f0-4b3e-a5d8-50319b08070d" (UID: "d8c5454c-78f0-4b3e-a5d8-50319b08070d"). InnerVolumeSpecName "kube-api-access-94wwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.261220 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" event={"ID":"d8c5454c-78f0-4b3e-a5d8-50319b08070d","Type":"ContainerDied","Data":"da31a528c4c93902f9430077dac6920b79a49061fcb31733d7e3c89bd61c806c"} Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.261295 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.290532 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.293953 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7tbgp"] Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.336792 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcprg\" (UniqueName: \"kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.336845 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.336871 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.336935 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.336962 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.337010 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c5454c-78f0-4b3e-a5d8-50319b08070d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.337024 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94wwf\" (UniqueName: \"kubernetes.io/projected/d8c5454c-78f0-4b3e-a5d8-50319b08070d-kube-api-access-94wwf\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.337039 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.337050 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.337061 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c5454c-78f0-4b3e-a5d8-50319b08070d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.338764 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.339174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.339686 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.341020 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.355431 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcprg\" (UniqueName: \"kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg\") pod \"controller-manager-64f47f75c7-8q5nq\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.528017 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.710622 4873 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7tbgp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: i/o timeout" start-of-body= Jan 21 00:09:17 crc kubenswrapper[4873]: I0121 00:09:17.710702 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7tbgp" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: i/o timeout" Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.072604 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c5454c-78f0-4b3e-a5d8-50319b08070d" path="/var/lib/kubelet/pods/d8c5454c-78f0-4b3e-a5d8-50319b08070d/volumes" Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.299373 4873 scope.go:117] "RemoveContainer" containerID="a9c32412789155d4d3bacab94807fce448cc1d402b9db0a4ce68830ef5b82eff" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.366264 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.366724 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szff9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n2txz_openshift-marketplace(53754e90-fc88-49e7-ad8e-50552d9bc151): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.369567 4873 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n2txz" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.446953 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.447140 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqlh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hkdsj_openshift-marketplace(bca22ae0-3824-47b7-8520-f7b7f0f657ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.448506 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hkdsj" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.562808 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.563416 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwmt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xv5c9_openshift-marketplace(e16d4483-cf0d-4977-bad2-ed10f6dde4c7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:09:18 crc kubenswrapper[4873]: E0121 00:09:18.564677 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xv5c9" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.599491 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mx2js"] Jan 21 00:09:18 crc kubenswrapper[4873]: W0121 00:09:18.607451 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f7e62f_ce78_4588_994f_8ab17d7821d1.slice/crio-0cd0cec2d0671947f82b490709913059ed4e5fb8eaa61bd9e6d9e0d51ad9e305 WatchSource:0}: Error finding container 0cd0cec2d0671947f82b490709913059ed4e5fb8eaa61bd9e6d9e0d51ad9e305: Status 404 returned error can't find the container with id 0cd0cec2d0671947f82b490709913059ed4e5fb8eaa61bd9e6d9e0d51ad9e305 Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.751775 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.754518 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 00:09:18 crc kubenswrapper[4873]: W0121 00:09:18.764271 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e2de2d_777c_4e7a_959c_70ed5c3f456d.slice/crio-ad99138b9dc8ed260db6d1b951f5ac8f52426ad5671405828c41379c50849ec2 WatchSource:0}: Error finding container ad99138b9dc8ed260db6d1b951f5ac8f52426ad5671405828c41379c50849ec2: Status 404 
returned error can't find the container with id ad99138b9dc8ed260db6d1b951f5ac8f52426ad5671405828c41379c50849ec2 Jan 21 00:09:18 crc kubenswrapper[4873]: W0121 00:09:18.765746 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podad1cbce2_d6c1_4a96_9581_e5b609be2fcf.slice/crio-5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14 WatchSource:0}: Error finding container 5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14: Status 404 returned error can't find the container with id 5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14 Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.853572 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 00:09:18 crc kubenswrapper[4873]: I0121 00:09:18.859080 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:18 crc kubenswrapper[4873]: W0121 00:09:18.887652 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4246056_62cc_4225_a780_c77cf7bb1949.slice/crio-645700e9840d03a6498df4c547dda7257e9514fb9097892dd769b625fc198134 WatchSource:0}: Error finding container 645700e9840d03a6498df4c547dda7257e9514fb9097892dd769b625fc198134: Status 404 returned error can't find the container with id 645700e9840d03a6498df4c547dda7257e9514fb9097892dd769b625fc198134 Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.274824 4873 generic.go:334] "Generic (PLEG): container finished" podID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerID="cb519e2f5362ef9b9b73fa4f8dd3f9991d7ffee3430eef4489b1a9161a0316bf" exitCode=0 Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.274906 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerDied","Data":"cb519e2f5362ef9b9b73fa4f8dd3f9991d7ffee3430eef4489b1a9161a0316bf"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.276674 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf","Type":"ContainerStarted","Data":"fec56de7a385c984c43fc3b709d2a567dafa7bef15115f0609ef4a4dc503f91f"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.276701 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf","Type":"ContainerStarted","Data":"5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.281607 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" event={"ID":"b4246056-62cc-4225-a780-c77cf7bb1949","Type":"ContainerStarted","Data":"30668052e319b98a7e984defcad85c90da00e29d474b1b75b2145bfcc0a83d7d"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.282048 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" event={"ID":"b4246056-62cc-4225-a780-c77cf7bb1949","Type":"ContainerStarted","Data":"645700e9840d03a6498df4c547dda7257e9514fb9097892dd769b625fc198134"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.283185 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.285715 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" event={"ID":"10e2de2d-777c-4e7a-959c-70ed5c3f456d","Type":"ContainerStarted","Data":"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.285767 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" event={"ID":"10e2de2d-777c-4e7a-959c-70ed5c3f456d","Type":"ContainerStarted","Data":"ad99138b9dc8ed260db6d1b951f5ac8f52426ad5671405828c41379c50849ec2"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.285780 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" podUID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" containerName="route-controller-manager" containerID="cri-o://4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206" gracePeriod=30 Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.285884 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.287410 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx2js" event={"ID":"c7f7e62f-ce78-4588-994f-8ab17d7821d1","Type":"ContainerStarted","Data":"683623abef4db9d6cbf1b4ab2b2e6f815f48c8fbb1619462a0dfaa0fcd22de4e"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.287431 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx2js" event={"ID":"c7f7e62f-ce78-4588-994f-8ab17d7821d1","Type":"ContainerStarted","Data":"0cd0cec2d0671947f82b490709913059ed4e5fb8eaa61bd9e6d9e0d51ad9e305"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.290679 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.293182 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c715fa4-49ca-4772-bb7d-11c742f07ec6","Type":"ContainerStarted","Data":"1239e7b90accc5254492b1fab3f59a72b1bcdb67dec5b694532e7954148d801f"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.293322 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c715fa4-49ca-4772-bb7d-11c742f07ec6","Type":"ContainerStarted","Data":"55fbafd3bbd5eb39047009a7d7961070f2bfe7beab46426388e6169e0f4b1a89"} Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.294250 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:19 crc kubenswrapper[4873]: E0121 00:09:19.295153 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xv5c9" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" Jan 21 00:09:19 crc kubenswrapper[4873]: 
E0121 00:09:19.295250 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n2txz" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" Jan 21 00:09:19 crc kubenswrapper[4873]: E0121 00:09:19.295604 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hkdsj" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.322389 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.322358887 podStartE2EDuration="3.322358887s" podCreationTimestamp="2026-01-21 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:19.318356029 +0000 UTC m=+191.558223695" watchObservedRunningTime="2026-01-21 00:09:19.322358887 +0000 UTC m=+191.562226543" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.337140 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=36.337108394 podStartE2EDuration="36.337108394s" podCreationTimestamp="2026-01-21 00:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:19.333959341 +0000 UTC m=+191.573826997" watchObservedRunningTime="2026-01-21 00:09:19.337108394 +0000 UTC m=+191.576976060" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.392382 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" podStartSLOduration=29.392359362 podStartE2EDuration="29.392359362s" podCreationTimestamp="2026-01-21 00:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:19.390975681 +0000 UTC m=+191.630843347" watchObservedRunningTime="2026-01-21 00:09:19.392359362 +0000 UTC m=+191.632227018" Jan 21 00:09:19 crc kubenswrapper[4873]: I0121 00:09:19.423518 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" podStartSLOduration=9.423500954 podStartE2EDuration="9.423500954s" podCreationTimestamp="2026-01-21 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:19.422683211 +0000 UTC m=+191.662550857" watchObservedRunningTime="2026-01-21 00:09:19.423500954 +0000 UTC m=+191.663368600" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.204498 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.236752 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:20 crc kubenswrapper[4873]: E0121 00:09:20.237245 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" containerName="route-controller-manager" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.237273 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" containerName="route-controller-manager" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.237393 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" containerName="route-controller-manager" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.239012 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.248800 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.274683 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbxs\" (UniqueName: \"kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.274731 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.274850 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.275046 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.301377 4873 generic.go:334] "Generic (PLEG): container finished" podID="9c715fa4-49ca-4772-bb7d-11c742f07ec6" containerID="1239e7b90accc5254492b1fab3f59a72b1bcdb67dec5b694532e7954148d801f" exitCode=0 Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.301442 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c715fa4-49ca-4772-bb7d-11c742f07ec6","Type":"ContainerDied","Data":"1239e7b90accc5254492b1fab3f59a72b1bcdb67dec5b694532e7954148d801f"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.308485 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerStarted","Data":"331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.310142 4873 generic.go:334] "Generic (PLEG): container finished" podID="ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" containerID="fec56de7a385c984c43fc3b709d2a567dafa7bef15115f0609ef4a4dc503f91f" exitCode=0 Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.310226 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf","Type":"ContainerDied","Data":"fec56de7a385c984c43fc3b709d2a567dafa7bef15115f0609ef4a4dc503f91f"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.311831 4873 generic.go:334] "Generic (PLEG): container finished" podID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" containerID="4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206" exitCode=0 Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.311892 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" event={"ID":"10e2de2d-777c-4e7a-959c-70ed5c3f456d","Type":"ContainerDied","Data":"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.311910 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" event={"ID":"10e2de2d-777c-4e7a-959c-70ed5c3f456d","Type":"ContainerDied","Data":"ad99138b9dc8ed260db6d1b951f5ac8f52426ad5671405828c41379c50849ec2"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.311927 4873 scope.go:117] "RemoveContainer" containerID="4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.311958 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.315374 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx2js" event={"ID":"c7f7e62f-ce78-4588-994f-8ab17d7821d1","Type":"ContainerStarted","Data":"d4a057b08e276ee9eb4a4f799d44f6e4c1ed208391372c1b0879d85fdf9f5a02"} Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.331858 4873 scope.go:117] "RemoveContainer" containerID="4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206" Jan 21 00:09:20 crc kubenswrapper[4873]: E0121 00:09:20.337812 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206\": container with ID starting with 4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206 not found: ID does not exist" containerID="4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.337857 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206"} err="failed to get container status \"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206\": rpc error: code = NotFound desc = could not find container \"4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206\": container with ID starting with 4ec9313ad673ac5e357e367e4a72cb70b06a31e02a0e39775785933e09eaa206 not found: ID does not exist" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.366079 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6q5pv" podStartSLOduration=3.310755764 podStartE2EDuration="50.366063445s" podCreationTimestamp="2026-01-21 00:08:30 +0000 UTC" firstStartedPulling="2026-01-21 00:08:32.945185771 +0000 UTC m=+145.185053417" lastFinishedPulling="2026-01-21 00:09:20.000493452 +0000 UTC m=+192.240361098" observedRunningTime="2026-01-21 00:09:20.34937089 +0000 UTC m=+192.589238536" watchObservedRunningTime="2026-01-21 00:09:20.366063445 +0000 UTC m=+192.605931091" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375777 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8ns\" (UniqueName: \"kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns\") pod \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375819 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert\") pod \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375847 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config\") pod \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375883 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca\") pod \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\" (UID: \"10e2de2d-777c-4e7a-959c-70ed5c3f456d\") " Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375960 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.375994 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbxs\" (UniqueName: \"kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.376013 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.376054 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.378228 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.378534 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config" (OuterVolumeSpecName: "config") pod "10e2de2d-777c-4e7a-959c-70ed5c3f456d" (UID: "10e2de2d-777c-4e7a-959c-70ed5c3f456d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.379200 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.379247 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca" (OuterVolumeSpecName: "client-ca") pod "10e2de2d-777c-4e7a-959c-70ed5c3f456d" (UID: "10e2de2d-777c-4e7a-959c-70ed5c3f456d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.385311 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10e2de2d-777c-4e7a-959c-70ed5c3f456d" (UID: "10e2de2d-777c-4e7a-959c-70ed5c3f456d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.385659 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mx2js" podStartSLOduration=173.385642605 podStartE2EDuration="2m53.385642605s" podCreationTimestamp="2026-01-21 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:20.385162932 +0000 UTC m=+192.625030578" watchObservedRunningTime="2026-01-21 00:09:20.385642605 +0000 UTC m=+192.625510251" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.386774 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.390139 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns" (OuterVolumeSpecName: "kube-api-access-2d8ns") pod "10e2de2d-777c-4e7a-959c-70ed5c3f456d" (UID: "10e2de2d-777c-4e7a-959c-70ed5c3f456d"). InnerVolumeSpecName "kube-api-access-2d8ns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.395172 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbxs\" (UniqueName: \"kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs\") pod \"route-controller-manager-7d9578f5cb-rfjlw\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.477279 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.477334 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8ns\" (UniqueName: \"kubernetes.io/projected/10e2de2d-777c-4e7a-959c-70ed5c3f456d-kube-api-access-2d8ns\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.477349 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e2de2d-777c-4e7a-959c-70ed5c3f456d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.477360 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e2de2d-777c-4e7a-959c-70ed5c3f456d-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.556880 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.662465 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.664002 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-2vnrv"] Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.744683 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:20 crc kubenswrapper[4873]: W0121 00:09:20.753803 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7310a5c0_3f60_4c9d_a541_e837b7db252e.slice/crio-b7dacb08c208292db9581253847c225b5242f0e4a8fa3037b70b193ddc08948a WatchSource:0}: Error finding container b7dacb08c208292db9581253847c225b5242f0e4a8fa3037b70b193ddc08948a: Status 404 returned error can't find the container with id b7dacb08c208292db9581253847c225b5242f0e4a8fa3037b70b193ddc08948a Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.837449 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.838445 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.843532 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.985145 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.985446 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:20 crc kubenswrapper[4873]: I0121 00:09:20.986844 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.087709 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.087778 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.087830 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.087897 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.087927 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock\") pod \"installer-9-crc\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.111146 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.160493 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.279753 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.280093 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.325985 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" event={"ID":"7310a5c0-3f60-4c9d-a541-e837b7db252e","Type":"ContainerStarted","Data":"58673fcef7245049a4223bc43f623b77dbba926ba9b774c625e654df34c2ab45"} Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.326027 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" event={"ID":"7310a5c0-3f60-4c9d-a541-e837b7db252e","Type":"ContainerStarted","Data":"b7dacb08c208292db9581253847c225b5242f0e4a8fa3037b70b193ddc08948a"} Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.326865 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.349283 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" podStartSLOduration=11.34926663 podStartE2EDuration="11.34926663s" podCreationTimestamp="2026-01-21 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:21.346971773 +0000 UTC m=+193.586839419" watchObservedRunningTime="2026-01-21 00:09:21.34926663 +0000 UTC m=+193.589134276" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.463215 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.601140 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.735453 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.739129 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794710 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir\") pod \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794773 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir\") pod \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794819 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access\") pod \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\" (UID: \"9c715fa4-49ca-4772-bb7d-11c742f07ec6\") " Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794845 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access\") pod \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\" (UID: \"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf\") " Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794840 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9c715fa4-49ca-4772-bb7d-11c742f07ec6" (UID: "9c715fa4-49ca-4772-bb7d-11c742f07ec6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.794869 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" (UID: "ad1cbce2-d6c1-4a96-9581-e5b609be2fcf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.801799 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9c715fa4-49ca-4772-bb7d-11c742f07ec6" (UID: "9c715fa4-49ca-4772-bb7d-11c742f07ec6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.802477 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" (UID: "ad1cbce2-d6c1-4a96-9581-e5b609be2fcf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.896496 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.897046 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.897062 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1cbce2-d6c1-4a96-9581-e5b609be2fcf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:21 crc kubenswrapper[4873]: I0121 00:09:21.897074 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c715fa4-49ca-4772-bb7d-11c742f07ec6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.072996 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e2de2d-777c-4e7a-959c-70ed5c3f456d" path="/var/lib/kubelet/pods/10e2de2d-777c-4e7a-959c-70ed5c3f456d/volumes" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.339630 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.339613 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9c715fa4-49ca-4772-bb7d-11c742f07ec6","Type":"ContainerDied","Data":"55fbafd3bbd5eb39047009a7d7961070f2bfe7beab46426388e6169e0f4b1a89"} Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.339993 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fbafd3bbd5eb39047009a7d7961070f2bfe7beab46426388e6169e0f4b1a89" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.342191 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07e2005b-07a3-41c9-b5a8-45faebbedbff","Type":"ContainerStarted","Data":"a8b7e6a64007b0b9be519b0e9f352ccd93a50c64b9ed290c37695ea13329b463"} Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.342233 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07e2005b-07a3-41c9-b5a8-45faebbedbff","Type":"ContainerStarted","Data":"488069edb2ab25e89bcb5c4c15054ba89df659ccc49dfad9a5a456f363c4cc88"} Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.343742 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.343905 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad1cbce2-d6c1-4a96-9581-e5b609be2fcf","Type":"ContainerDied","Data":"5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14"} Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.343938 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba1f84fbab2201908d8d1f419b8c26dc08167b77160ee7b50fa376e55e58c14" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.366778 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.366762222 podStartE2EDuration="2.366762222s" podCreationTimestamp="2026-01-21 00:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:22.365046392 +0000 UTC m=+194.604914038" watchObservedRunningTime="2026-01-21 00:09:22.366762222 +0000 UTC m=+194.606629868" Jan 21 00:09:22 crc kubenswrapper[4873]: I0121 00:09:22.366903 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6q5pv" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" probeResult="failure" output=< Jan 21 00:09:22 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:09:22 crc kubenswrapper[4873]: > Jan 21 00:09:25 crc kubenswrapper[4873]: I0121 00:09:25.608493 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.136302 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.137336 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" containerName="controller-manager" containerID="cri-o://30668052e319b98a7e984defcad85c90da00e29d474b1b75b2145bfcc0a83d7d" gracePeriod=30 Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.152323 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.152529 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" containerID="cri-o://58673fcef7245049a4223bc43f623b77dbba926ba9b774c625e654df34c2ab45" gracePeriod=30 Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.558004 4873 patch_prober.go:28] interesting pod/route-controller-manager-7d9578f5cb-rfjlw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 21 00:09:30 crc kubenswrapper[4873]: I0121 00:09:30.558068 4873 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 21 00:09:31 crc kubenswrapper[4873]: I0121 00:09:31.630158 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:09:31 crc kubenswrapper[4873]: I0121 00:09:31.631060 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:09:31 crc kubenswrapper[4873]: I0121 00:09:31.631179 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:09:31 crc kubenswrapper[4873]: I0121 00:09:31.631726 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:09:31 crc kubenswrapper[4873]: I0121 00:09:31.631874 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46" gracePeriod=600 Jan 21 00:09:32 crc kubenswrapper[4873]: I0121 00:09:32.326771 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6q5pv" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" probeResult="failure" output=< Jan 21 00:09:32 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:09:32 crc kubenswrapper[4873]: > Jan 21 00:09:36 crc kubenswrapper[4873]: I0121 00:09:36.257039 4873 generic.go:334] "Generic (PLEG): container finished" podID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerID="58673fcef7245049a4223bc43f623b77dbba926ba9b774c625e654df34c2ab45" exitCode=0 Jan 21 00:09:36 crc kubenswrapper[4873]: I0121 00:09:36.257194 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" event={"ID":"7310a5c0-3f60-4c9d-a541-e837b7db252e","Type":"ContainerDied","Data":"58673fcef7245049a4223bc43f623b77dbba926ba9b774c625e654df34c2ab45"} Jan 21 00:09:36 crc kubenswrapper[4873]: I0121 00:09:36.259328 4873 generic.go:334] "Generic (PLEG): container finished" podID="b4246056-62cc-4225-a780-c77cf7bb1949" containerID="30668052e319b98a7e984defcad85c90da00e29d474b1b75b2145bfcc0a83d7d" exitCode=0 Jan 21 00:09:36 crc kubenswrapper[4873]: I0121 00:09:36.259379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" 
event={"ID":"b4246056-62cc-4225-a780-c77cf7bb1949","Type":"ContainerDied","Data":"30668052e319b98a7e984defcad85c90da00e29d474b1b75b2145bfcc0a83d7d"} Jan 21 00:09:37 crc kubenswrapper[4873]: I0121 00:09:37.271630 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46" exitCode=0 Jan 21 00:09:37 crc kubenswrapper[4873]: I0121 00:09:37.271700 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46"} Jan 21 00:09:38 crc kubenswrapper[4873]: I0121 00:09:38.529655 4873 patch_prober.go:28] interesting pod/controller-manager-64f47f75c7-8q5nq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:09:38 crc kubenswrapper[4873]: I0121 00:09:38.529752 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.056462 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.090533 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert\") pod \"7310a5c0-3f60-4c9d-a541-e837b7db252e\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.090807 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca\") pod \"7310a5c0-3f60-4c9d-a541-e837b7db252e\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.092889 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnbxs\" (UniqueName: \"kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs\") pod \"7310a5c0-3f60-4c9d-a541-e837b7db252e\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.093030 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config\") pod \"7310a5c0-3f60-4c9d-a541-e837b7db252e\" (UID: \"7310a5c0-3f60-4c9d-a541-e837b7db252e\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.094426 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config" (OuterVolumeSpecName: "config") pod "7310a5c0-3f60-4c9d-a541-e837b7db252e" (UID: 
"7310a5c0-3f60-4c9d-a541-e837b7db252e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.095074 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca" (OuterVolumeSpecName: "client-ca") pod "7310a5c0-3f60-4c9d-a541-e837b7db252e" (UID: "7310a5c0-3f60-4c9d-a541-e837b7db252e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.096822 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:41 crc kubenswrapper[4873]: E0121 00:09:41.097235 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c715fa4-49ca-4772-bb7d-11c742f07ec6" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.097351 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c715fa4-49ca-4772-bb7d-11c742f07ec6" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: E0121 00:09:41.097467 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.097573 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" Jan 21 00:09:41 crc kubenswrapper[4873]: E0121 00:09:41.097673 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.097752 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.097951 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.098052 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1cbce2-d6c1-4a96-9581-e5b609be2fcf" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.098133 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c715fa4-49ca-4772-bb7d-11c742f07ec6" containerName="pruner" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.098761 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.108992 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7310a5c0-3f60-4c9d-a541-e837b7db252e" (UID: "7310a5c0-3f60-4c9d-a541-e837b7db252e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.113876 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs" (OuterVolumeSpecName: "kube-api-access-tnbxs") pod "7310a5c0-3f60-4c9d-a541-e837b7db252e" (UID: "7310a5c0-3f60-4c9d-a541-e837b7db252e"). 
InnerVolumeSpecName "kube-api-access-tnbxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.117748 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.141245 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.194191 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert\") pod \"b4246056-62cc-4225-a780-c77cf7bb1949\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.194250 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config\") pod \"b4246056-62cc-4225-a780-c77cf7bb1949\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.194291 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcprg\" (UniqueName: \"kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg\") pod \"b4246056-62cc-4225-a780-c77cf7bb1949\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.194328 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca\") pod \"b4246056-62cc-4225-a780-c77cf7bb1949\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.194377 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles\") pod \"b4246056-62cc-4225-a780-c77cf7bb1949\" (UID: \"b4246056-62cc-4225-a780-c77cf7bb1949\") " Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195058 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca" (OuterVolumeSpecName: "client-ca") pod "b4246056-62cc-4225-a780-c77cf7bb1949" (UID: "b4246056-62cc-4225-a780-c77cf7bb1949"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195097 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config" (OuterVolumeSpecName: "config") pod "b4246056-62cc-4225-a780-c77cf7bb1949" (UID: "b4246056-62cc-4225-a780-c77cf7bb1949"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195515 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b4246056-62cc-4225-a780-c77cf7bb1949" (UID: "b4246056-62cc-4225-a780-c77cf7bb1949"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195722 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195779 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195836 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98gz\" (UniqueName: \"kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195869 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195925 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195946 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7310a5c0-3f60-4c9d-a541-e837b7db252e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195961 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnbxs\" (UniqueName: \"kubernetes.io/projected/7310a5c0-3f60-4c9d-a541-e837b7db252e-kube-api-access-tnbxs\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195975 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195987 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.195999 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7310a5c0-3f60-4c9d-a541-e837b7db252e-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.196011 4873 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4246056-62cc-4225-a780-c77cf7bb1949-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.197653 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg" (OuterVolumeSpecName: "kube-api-access-xcprg") pod "b4246056-62cc-4225-a780-c77cf7bb1949" (UID: "b4246056-62cc-4225-a780-c77cf7bb1949"). InnerVolumeSpecName "kube-api-access-xcprg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.197926 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b4246056-62cc-4225-a780-c77cf7bb1949" (UID: "b4246056-62cc-4225-a780-c77cf7bb1949"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297102 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297173 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297208 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j98gz\" (UniqueName: \"kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297229 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297281 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4246056-62cc-4225-a780-c77cf7bb1949-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.297293 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcprg\" (UniqueName: \"kubernetes.io/projected/b4246056-62cc-4225-a780-c77cf7bb1949-kube-api-access-xcprg\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.298098 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca\") 
pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.299711 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.302513 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.314214 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98gz\" (UniqueName: \"kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz\") pod \"route-controller-manager-8698846dc-ff7jh\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.347341 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.395265 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.460101 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.583537 4873 patch_prober.go:28] interesting pod/route-controller-manager-7d9578f5cb-rfjlw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.584012 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.603345 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.603390 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw" event={"ID":"7310a5c0-3f60-4c9d-a541-e837b7db252e","Type":"ContainerDied","Data":"b7dacb08c208292db9581253847c225b5242f0e4a8fa3037b70b193ddc08948a"} Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.603494 4873 scope.go:117] "RemoveContainer" containerID="58673fcef7245049a4223bc43f623b77dbba926ba9b774c625e654df34c2ab45" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.607460 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845"} Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.613715 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" event={"ID":"b4246056-62cc-4225-a780-c77cf7bb1949","Type":"ContainerDied","Data":"645700e9840d03a6498df4c547dda7257e9514fb9097892dd769b625fc198134"} Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.613770 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f47f75c7-8q5nq" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.633422 4873 scope.go:117] "RemoveContainer" containerID="30668052e319b98a7e984defcad85c90da00e29d474b1b75b2145bfcc0a83d7d" Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.652376 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.657553 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9578f5cb-rfjlw"] Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.686636 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.689144 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64f47f75c7-8q5nq"] Jan 21 00:09:41 crc kubenswrapper[4873]: I0121 00:09:41.998123 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:42 crc kubenswrapper[4873]: W0121 00:09:42.063305 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffaa7856_cb81_4a67_b4bb_13617aa84e31.slice/crio-2803a651975a5edc4da91a81e7be7d0e6d50f334a0f6c916259a7323c3f94e6b WatchSource:0}: Error finding container 2803a651975a5edc4da91a81e7be7d0e6d50f334a0f6c916259a7323c3f94e6b: Status 404 returned error can't find the container with id 2803a651975a5edc4da91a81e7be7d0e6d50f334a0f6c916259a7323c3f94e6b Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.068860 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7310a5c0-3f60-4c9d-a541-e837b7db252e" path="/var/lib/kubelet/pods/7310a5c0-3f60-4c9d-a541-e837b7db252e/volumes" Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.069551 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" path="/var/lib/kubelet/pods/b4246056-62cc-4225-a780-c77cf7bb1949/volumes" Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.620720 4873 generic.go:334] "Generic (PLEG): container finished" podID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerID="aae83278df86999a8dc62b4f1bc21a81a90f43708d2485eef52c08aff16ae835" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.620789 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerDied","Data":"aae83278df86999a8dc62b4f1bc21a81a90f43708d2485eef52c08aff16ae835"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.623356 4873 generic.go:334] "Generic (PLEG): container finished" podID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerID="a8cfa958bcaca15e1d333d3c4deaed3d01f412beb565639753319ebcd50ffd3c" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.623435 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerDied","Data":"a8cfa958bcaca15e1d333d3c4deaed3d01f412beb565639753319ebcd50ffd3c"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.627336 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" event={"ID":"ffaa7856-cb81-4a67-b4bb-13617aa84e31","Type":"ContainerStarted","Data":"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.627403 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" event={"ID":"ffaa7856-cb81-4a67-b4bb-13617aa84e31","Type":"ContainerStarted","Data":"2803a651975a5edc4da91a81e7be7d0e6d50f334a0f6c916259a7323c3f94e6b"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.627445 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.629297 4873 generic.go:334] "Generic (PLEG): container finished" podID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerID="ef5bd9f04718f58ae0cffcdc6996dd85e10016985a6afec09475eb65bbec795d" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.629347 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerDied","Data":"ef5bd9f04718f58ae0cffcdc6996dd85e10016985a6afec09475eb65bbec795d"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.632007 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4273d27-1592-4728-8bd2-14888045fffb" containerID="93fdb7b1b0fac8e48ebfe56f482ef09cad576a2f49fd7da6b8e45b38efee3a5b" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.632078 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerDied","Data":"93fdb7b1b0fac8e48ebfe56f482ef09cad576a2f49fd7da6b8e45b38efee3a5b"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.635678 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerID="c32c5f58c598d88e121a507e4eab73137dd8227b9877b488b0d07b20ca21a6cb" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.635714 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerDied","Data":"c32c5f58c598d88e121a507e4eab73137dd8227b9877b488b0d07b20ca21a6cb"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.638875 4873 generic.go:334] "Generic (PLEG): container finished" podID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerID="f961316512fdfa3a4751438181ea9a35784a8291f772d953e026ba7f58418db2" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.638909 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerDied","Data":"f961316512fdfa3a4751438181ea9a35784a8291f772d953e026ba7f58418db2"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.641269 4873 generic.go:334] "Generic (PLEG): container finished" podID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerID="e42ab9fed38fcb233d2ade309a582bb7467fdad0c8149b18ba2cb8b3744537cb" exitCode=0 Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.641335 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerDied","Data":"e42ab9fed38fcb233d2ade309a582bb7467fdad0c8149b18ba2cb8b3744537cb"} Jan 21 00:09:42 crc kubenswrapper[4873]: I0121 00:09:42.698400 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" podStartSLOduration=12.69837441 podStartE2EDuration="12.69837441s" podCreationTimestamp="2026-01-21 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:42.694623043 +0000 UTC m=+214.934490689" watchObservedRunningTime="2026-01-21 00:09:42.69837441 +0000 UTC m=+214.938242066" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.074949 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.650575 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:43 crc kubenswrapper[4873]: E0121 00:09:43.651151 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" containerName="controller-manager" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.651167 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" containerName="controller-manager" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.651300 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4246056-62cc-4225-a780-c77cf7bb1949" containerName="controller-manager" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.651705 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.652676 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerStarted","Data":"cded240bd90d0eec75e383075d390db2aa74fb120cf1a28be65ef5b4fdead0d7"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655260 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655271 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655312 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655468 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655737 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.655784 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.658493 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerStarted","Data":"bfd89a50d439dea40bea4e18e4db98e950ebc60ef86e3d87813bcef6abe128d8"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.662569 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerStarted","Data":"9999b359f66441b07812d733e48fd02a512e418c048b9797f0c6937a2561124b"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.663005 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.670385 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.670539 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerStarted","Data":"c7304b2e21b6bc3558e09d266937c26f7826381acc9de63e6b6e68aab12a6288"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.681922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerStarted","Data":"15b517abec7d6c15d7b1361bc1024e3dc7dcb3970a0d1bbcbdbe3936f5bfb21f"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.691314 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerStarted","Data":"dabcf6ab0fc7e488489beeb82ccd9104289cfa52338d880c84bf34ba2c909c5a"} Jan 21 00:09:43 crc 
kubenswrapper[4873]: I0121 00:09:43.694469 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerStarted","Data":"3f971b98b2b9344536efcf4dd8b96d5c67d429d665abd13a749924b7fdd65690"} Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.707985 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qjfd5" podStartSLOduration=2.627819662 podStartE2EDuration="1m12.707968516s" podCreationTimestamp="2026-01-21 00:08:31 +0000 UTC" firstStartedPulling="2026-01-21 00:08:33.084899031 +0000 UTC m=+145.324766677" lastFinishedPulling="2026-01-21 00:09:43.165047885 +0000 UTC m=+215.404915531" observedRunningTime="2026-01-21 00:09:43.706752681 +0000 UTC m=+215.946620337" watchObservedRunningTime="2026-01-21 00:09:43.707968516 +0000 UTC m=+215.947836172" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.726355 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n2txz" podStartSLOduration=3.1667666309999998 podStartE2EDuration="1m10.726337553s" podCreationTimestamp="2026-01-21 00:08:33 +0000 UTC" firstStartedPulling="2026-01-21 00:08:35.550782232 +0000 UTC m=+147.790649878" lastFinishedPulling="2026-01-21 00:09:43.110353154 +0000 UTC m=+215.350220800" observedRunningTime="2026-01-21 00:09:43.724538642 +0000 UTC m=+215.964406298" watchObservedRunningTime="2026-01-21 00:09:43.726337553 +0000 UTC m=+215.966205199" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.734608 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.734688 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.734796 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.734824 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zlt\" (UniqueName: \"kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.734866 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.761899 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqn92" podStartSLOduration=2.151766487 podStartE2EDuration="1m9.761875974s" podCreationTimestamp="2026-01-21 00:08:34 +0000 UTC" firstStartedPulling="2026-01-21 00:08:35.567321502 +0000 UTC m=+147.807189148" lastFinishedPulling="2026-01-21 00:09:43.177430989 +0000 UTC m=+215.417298635" observedRunningTime="2026-01-21 00:09:43.759731921 +0000 UTC m=+215.999599567" watchObservedRunningTime="2026-01-21 00:09:43.761875974 +0000 UTC m=+216.001743620" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.797359 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqslw" podStartSLOduration=3.340710901 podStartE2EDuration="1m9.797339581s" podCreationTimestamp="2026-01-21 00:08:34 +0000 UTC" firstStartedPulling="2026-01-21 00:08:36.632529506 +0000 UTC m=+148.872397152" lastFinishedPulling="2026-01-21 00:09:43.089158186 +0000 UTC m=+215.329025832" observedRunningTime="2026-01-21 00:09:43.796333982 +0000 UTC m=+216.036201628" watchObservedRunningTime="2026-01-21 00:09:43.797339581 +0000 UTC m=+216.037207227" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836014 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbxjc" podStartSLOduration=4.020811913 podStartE2EDuration="1m12.835998381s" podCreationTimestamp="2026-01-21 00:08:31 +0000 UTC" firstStartedPulling="2026-01-21 00:08:34.435930476 +0000 UTC m=+146.675798132" lastFinishedPulling="2026-01-21 00:09:43.251116954 +0000 UTC m=+215.490984600" observedRunningTime="2026-01-21 00:09:43.833812728 +0000 UTC m=+216.073680384" watchObservedRunningTime="2026-01-21 00:09:43.835998381 +0000 UTC m=+216.075866027" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836044 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836335 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836439 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836483 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zlt\" (UniqueName: 
\"kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.836524 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.837167 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.837765 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.838602 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.843161 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.857380 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xv5c9" podStartSLOduration=2.303349894 podStartE2EDuration="1m12.857356894s" podCreationTimestamp="2026-01-21 00:08:31 +0000 UTC" firstStartedPulling="2026-01-21 00:08:32.958932759 +0000 UTC m=+145.198800405" lastFinishedPulling="2026-01-21 00:09:43.512939759 +0000 UTC m=+215.752807405" observedRunningTime="2026-01-21 00:09:43.855287635 +0000 UTC m=+216.095155291" watchObservedRunningTime="2026-01-21 00:09:43.857356894 +0000 UTC m=+216.097224530" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.866227 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zlt\" (UniqueName: \"kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt\") pod \"controller-manager-6bb5f94d9-5bw4z\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.876383 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkdsj" 
podStartSLOduration=3.034257963 podStartE2EDuration="1m11.87636597s" podCreationTimestamp="2026-01-21 00:08:32 +0000 UTC" firstStartedPulling="2026-01-21 00:08:34.440232173 +0000 UTC m=+146.680099819" lastFinishedPulling="2026-01-21 00:09:43.28234018 +0000 UTC m=+215.522207826" observedRunningTime="2026-01-21 00:09:43.874600108 +0000 UTC m=+216.114467754" watchObservedRunningTime="2026-01-21 00:09:43.87636597 +0000 UTC m=+216.116233616" Jan 21 00:09:43 crc kubenswrapper[4873]: I0121 00:09:43.975332 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.206325 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:44 crc kubenswrapper[4873]: W0121 00:09:44.221695 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac3bec2_9bed_4cf2_8fce_12bbda05e441.slice/crio-1ca85aebfb8521f52ebbc2431340b63ad2bd69bc3192800e80d79684aa518920 WatchSource:0}: Error finding container 1ca85aebfb8521f52ebbc2431340b63ad2bd69bc3192800e80d79684aa518920: Status 404 returned error can't find the container with id 1ca85aebfb8521f52ebbc2431340b63ad2bd69bc3192800e80d79684aa518920 Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.393648 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.393716 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.701956 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" event={"ID":"4ac3bec2-9bed-4cf2-8fce-12bbda05e441","Type":"ContainerStarted","Data":"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a"} Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.702334 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.702351 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" event={"ID":"4ac3bec2-9bed-4cf2-8fce-12bbda05e441","Type":"ContainerStarted","Data":"1ca85aebfb8521f52ebbc2431340b63ad2bd69bc3192800e80d79684aa518920"} Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.711614 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.716894 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" podStartSLOduration=14.716881171 podStartE2EDuration="14.716881171s" podCreationTimestamp="2026-01-21 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:44.715549293 +0000 UTC m=+216.955416939" watchObservedRunningTime="2026-01-21 00:09:44.716881171 +0000 UTC m=+216.956748817" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.804731 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:09:44 crc kubenswrapper[4873]: I0121 00:09:44.804771 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:09:45 crc kubenswrapper[4873]: I0121 00:09:45.437753 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqn92" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="registry-server" probeResult="failure" output=< Jan 21 00:09:45 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:09:45 crc kubenswrapper[4873]: > Jan 21 00:09:45 crc kubenswrapper[4873]: I0121 00:09:45.837741 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqslw" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="registry-server" probeResult="failure" output=< Jan 21 00:09:45 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:09:45 crc kubenswrapper[4873]: > Jan 21 00:09:50 crc kubenswrapper[4873]: I0121 00:09:50.149014 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:50 crc kubenswrapper[4873]: I0121 00:09:50.149696 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" podUID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" containerName="controller-manager" containerID="cri-o://a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a" gracePeriod=30 Jan 21 00:09:50 crc kubenswrapper[4873]: I0121 00:09:50.242168 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:50 crc kubenswrapper[4873]: I0121 00:09:50.242395 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" podUID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" containerName="route-controller-manager" containerID="cri-o://77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe" gracePeriod=30 Jan 21 00:09:50 crc kubenswrapper[4873]: I0121 00:09:50.632696 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerName="oauth-openshift" containerID="cri-o://2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8" gracePeriod=15 Jan 21 00:09:50 crc kubenswrapper[4873]: E0121 00:09:50.839126 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffaa7856_cb81_4a67_b4bb_13617aa84e31.slice/crio-conmon-77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe.scope\": RecentStats: unable to find data in memory cache]" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.116677 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129614 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129657 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129697 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129726 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129743 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129768 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129789 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129806 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129822 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129847 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129867 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129882 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129899 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.129924 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig\") pod \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\" (UID: \"fa9a91c2-efee-4e59-acaa-c5f236e0f857\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.133429 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.133914 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.134228 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.135897 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.136398 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj" (OuterVolumeSpecName: "kube-api-access-jrsrj") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "kube-api-access-jrsrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.138126 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.138241 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.139821 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.144178 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.144411 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.145216 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.145630 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.151671 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.152468 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fa9a91c2-efee-4e59-acaa-c5f236e0f857" (UID: "fa9a91c2-efee-4e59-acaa-c5f236e0f857"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.155586 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-jvwgz"] Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.155804 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerName="oauth-openshift" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.155821 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerName="oauth-openshift" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.155932 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerName="oauth-openshift" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.156329 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.174286 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-jvwgz"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.191238 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.218449 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231234 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert\") pod \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231292 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca\") pod \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231329 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles\") pod \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231366 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert\") pod \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231425 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4zlt\" (UniqueName: \"kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt\") pod \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231458 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config\") pod \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231499 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config\") pod \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231543 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca\") pod \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\" (UID: \"4ac3bec2-9bed-4cf2-8fce-12bbda05e441\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231648 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j98gz\" (UniqueName: \"kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz\") pod \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\" (UID: \"ffaa7856-cb81-4a67-b4bb-13617aa84e31\") " Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231901 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-login\") 
pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.231947 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.232004 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.232199 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca" (OuterVolumeSpecName: "client-ca") pod "ffaa7856-cb81-4a67-b4bb-13617aa84e31" (UID: "ffaa7856-cb81-4a67-b4bb-13617aa84e31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.232260 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ac3bec2-9bed-4cf2-8fce-12bbda05e441" (UID: "4ac3bec2-9bed-4cf2-8fce-12bbda05e441"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.232925 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config" (OuterVolumeSpecName: "config") pod "4ac3bec2-9bed-4cf2-8fce-12bbda05e441" (UID: "4ac3bec2-9bed-4cf2-8fce-12bbda05e441"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.233282 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ac3bec2-9bed-4cf2-8fce-12bbda05e441" (UID: "4ac3bec2-9bed-4cf2-8fce-12bbda05e441"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.234858 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.234952 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235040 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235079 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235104 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config" (OuterVolumeSpecName: "config") pod "ffaa7856-cb81-4a67-b4bb-13617aa84e31" (UID: "ffaa7856-cb81-4a67-b4bb-13617aa84e31"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235173 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-dir\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235222 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvg7\" (UniqueName: \"kubernetes.io/projected/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-kube-api-access-gwvg7\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235259 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235290 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235323 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-policies\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235418 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235454 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235532 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/fa9a91c2-efee-4e59-acaa-c5f236e0f857-kube-api-access-jrsrj\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235581 4873 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235601 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235620 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235642 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235660 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235704 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235725 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235746 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235766 4873 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235805 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235825 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235846 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235918 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235943 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235962 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235981 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffaa7856-cb81-4a67-b4bb-13617aa84e31-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.235998 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.236015 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa9a91c2-efee-4e59-acaa-c5f236e0f857-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.237874 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz" (OuterVolumeSpecName: "kube-api-access-j98gz") pod "ffaa7856-cb81-4a67-b4bb-13617aa84e31" (UID: "ffaa7856-cb81-4a67-b4bb-13617aa84e31"). InnerVolumeSpecName "kube-api-access-j98gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.238042 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ffaa7856-cb81-4a67-b4bb-13617aa84e31" (UID: "ffaa7856-cb81-4a67-b4bb-13617aa84e31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.239277 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ac3bec2-9bed-4cf2-8fce-12bbda05e441" (UID: "4ac3bec2-9bed-4cf2-8fce-12bbda05e441"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.244682 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt" (OuterVolumeSpecName: "kube-api-access-h4zlt") pod "4ac3bec2-9bed-4cf2-8fce-12bbda05e441" (UID: "4ac3bec2-9bed-4cf2-8fce-12bbda05e441"). InnerVolumeSpecName "kube-api-access-h4zlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337406 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337764 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337795 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337843 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337868 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337907 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-dir\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337936 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvg7\" (UniqueName: \"kubernetes.io/projected/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-kube-api-access-gwvg7\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337959 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " 
pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.337983 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-policies\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338004 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338029 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338054 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338084 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338105 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338168 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338182 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaa7856-cb81-4a67-b4bb-13617aa84e31-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338195 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4zlt\" (UniqueName: \"kubernetes.io/projected/4ac3bec2-9bed-4cf2-8fce-12bbda05e441-kube-api-access-h4zlt\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338209 
4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j98gz\" (UniqueName: \"kubernetes.io/projected/ffaa7856-cb81-4a67-b4bb-13617aa84e31-kube-api-access-j98gz\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338317 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338458 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-dir\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.338978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-audit-policies\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.339277 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.339564 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.342276 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.342384 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.342577 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: 
\"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.342940 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.343141 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.343356 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.343870 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.348951 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.352623 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvg7\" (UniqueName: \"kubernetes.io/projected/1af8b89d-5426-40c8-a5c9-00bfb030d4e6-kube-api-access-gwvg7\") pod \"oauth-openshift-574b75df8-jvwgz\" (UID: \"1af8b89d-5426-40c8-a5c9-00bfb030d4e6\") " pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.488896 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.516752 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.516985 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.580584 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.615816 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.615868 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.661886 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85f6d5d975-gwkc5"] Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.662494 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" containerName="route-controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.662515 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" containerName="route-controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.662572 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" containerName="controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.662586 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" containerName="controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.662794 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" containerName="controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.662819 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" containerName="route-controller-manager" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.663431 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.666652 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.679900 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.679960 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.679972 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85f6d5d975-gwkc5"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.689621 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.744529 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-config\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745107 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-serving-cert\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.744751 4873 generic.go:334] "Generic (PLEG): container finished" podID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" containerID="77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe" exitCode=0 Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.744845 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745240 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-proxy-ca-bundles\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.744776 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" event={"ID":"ffaa7856-cb81-4a67-b4bb-13617aa84e31","Type":"ContainerDied","Data":"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745332 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85b9f62-3b47-4198-923b-912ea15f3761-serving-cert\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745324 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh" event={"ID":"ffaa7856-cb81-4a67-b4bb-13617aa84e31","Type":"ContainerDied","Data":"2803a651975a5edc4da91a81e7be7d0e6d50f334a0f6c916259a7323c3f94e6b"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745368 4873 scope.go:117] "RemoveContainer" 
containerID="77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745498 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9g9\" (UniqueName: \"kubernetes.io/projected/d85b9f62-3b47-4198-923b-912ea15f3761-kube-api-access-gv9g9\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745571 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhprn\" (UniqueName: \"kubernetes.io/projected/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-kube-api-access-fhprn\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745616 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-config\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-client-ca\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.745667 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-client-ca\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.746829 4873 generic.go:334] "Generic (PLEG): container finished" podID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" containerID="2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8" exitCode=0 Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.746929 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" event={"ID":"fa9a91c2-efee-4e59-acaa-c5f236e0f857","Type":"ContainerDied","Data":"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.746939 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.746962 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7d257" event={"ID":"fa9a91c2-efee-4e59-acaa-c5f236e0f857","Type":"ContainerDied","Data":"38c56e5f14c2215b5c39381a156b80c35cf6b4c6dcc1b342acef9240d586ae04"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.748432 4873 generic.go:334] "Generic (PLEG): container finished" podID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" containerID="a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a" exitCode=0 Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.748473 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" event={"ID":"4ac3bec2-9bed-4cf2-8fce-12bbda05e441","Type":"ContainerDied","Data":"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.748520 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" event={"ID":"4ac3bec2-9bed-4cf2-8fce-12bbda05e441","Type":"ContainerDied","Data":"1ca85aebfb8521f52ebbc2431340b63ad2bd69bc3192800e80d79684aa518920"} Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.748533 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.759731 4873 scope.go:117] "RemoveContainer" containerID="77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe" Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.760101 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe\": container with ID starting with 77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe not found: ID does not exist" containerID="77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.760147 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe"} err="failed to get container status \"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe\": rpc error: code = NotFound desc = could not find container \"77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe\": container with ID starting with 77d8dddedd2085577b5c457ed8ee1c3c9c98dcba154f32ab2869b1dbc375befe not found: ID does not exist" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.760174 4873 scope.go:117] "RemoveContainer" containerID="2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.777591 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.779686 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8698846dc-ff7jh"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.787029 4873 scope.go:117] "RemoveContainer" 
containerID="2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8" Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.787828 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8\": container with ID starting with 2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8 not found: ID does not exist" containerID="2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.787859 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8"} err="failed to get container status \"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8\": rpc error: code = NotFound desc = could not find container \"2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8\": container with ID starting with 2fc4762c88ca610bd0deccd646d319b37f460d3e8bb264c50da0543389df37f8 not found: ID does not exist" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.787881 4873 scope.go:117] "RemoveContainer" containerID="a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.789777 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.794419 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.802050 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bb5f94d9-5bw4z"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.806726 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.808127 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7d257"] Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.811795 4873 scope.go:117] "RemoveContainer" containerID="a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a" Jan 21 00:09:51 crc kubenswrapper[4873]: E0121 00:09:51.812284 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a\": container with ID starting with a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a not found: ID does not exist" containerID="a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.812336 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a"} err="failed to get container status \"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a\": rpc error: code = NotFound desc = could not find container \"a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a\": container with ID starting with a42c950f1fc7cf2206ba12ddd5ee14f36985cb4b5ce60581946f133063fca05a not found: ID does not exist" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.813683 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.813973 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.814021 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847103 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-config\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-client-ca\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847204 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-client-ca\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847225 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-config\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847241 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-serving-cert\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847262 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-proxy-ca-bundles\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847283 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85b9f62-3b47-4198-923b-912ea15f3761-serving-cert\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847297 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gv9g9\" (UniqueName: \"kubernetes.io/projected/d85b9f62-3b47-4198-923b-912ea15f3761-kube-api-access-gv9g9\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.847324 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhprn\" (UniqueName: \"kubernetes.io/projected/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-kube-api-access-fhprn\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.848364 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-client-ca\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.848670 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-client-ca\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.850245 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-config\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.850376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-config\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.851317 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85b9f62-3b47-4198-923b-912ea15f3761-serving-cert\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.852319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-serving-cert\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.853892 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d85b9f62-3b47-4198-923b-912ea15f3761-proxy-ca-bundles\") pod 
\"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.861600 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.862944 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv9g9\" (UniqueName: \"kubernetes.io/projected/d85b9f62-3b47-4198-923b-912ea15f3761-kube-api-access-gv9g9\") pod \"controller-manager-85f6d5d975-gwkc5\" (UID: \"d85b9f62-3b47-4198-923b-912ea15f3761\") " pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.867849 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhprn\" (UniqueName: \"kubernetes.io/projected/50a7cfbb-3448-42b2-b707-94b3f9f5baf2-kube-api-access-fhprn\") pod \"route-controller-manager-b7c8797d5-pkplk\" (UID: \"50a7cfbb-3448-42b2-b707-94b3f9f5baf2\") " pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.932134 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-jvwgz"] Jan 21 00:09:51 crc kubenswrapper[4873]: W0121 00:09:51.936269 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af8b89d_5426_40c8_a5c9_00bfb030d4e6.slice/crio-0ddf102b9b6202e14c264679d02506f3f708fa31937281a5b2cd7b212678f20d WatchSource:0}: Error finding container 0ddf102b9b6202e14c264679d02506f3f708fa31937281a5b2cd7b212678f20d: Status 404 returned error can't find the container with id 0ddf102b9b6202e14c264679d02506f3f708fa31937281a5b2cd7b212678f20d Jan 21 00:09:51 crc kubenswrapper[4873]: I0121 00:09:51.996591 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.006180 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.073423 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac3bec2-9bed-4cf2-8fce-12bbda05e441" path="/var/lib/kubelet/pods/4ac3bec2-9bed-4cf2-8fce-12bbda05e441/volumes" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.074117 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9a91c2-efee-4e59-acaa-c5f236e0f857" path="/var/lib/kubelet/pods/fa9a91c2-efee-4e59-acaa-c5f236e0f857/volumes" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.074685 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaa7856-cb81-4a67-b4bb-13617aa84e31" path="/var/lib/kubelet/pods/ffaa7856-cb81-4a67-b4bb-13617aa84e31/volumes" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.414409 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk"] Jan 21 00:09:52 crc kubenswrapper[4873]: W0121 00:09:52.418375 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a7cfbb_3448_42b2_b707_94b3f9f5baf2.slice/crio-8ee8cbc3f410fa8428e55d8c1f7cef8c5c234879f9dc9400aa561a4c59023761 WatchSource:0}: Error finding container 8ee8cbc3f410fa8428e55d8c1f7cef8c5c234879f9dc9400aa561a4c59023761: Status 404 returned error can't find the container with id 8ee8cbc3f410fa8428e55d8c1f7cef8c5c234879f9dc9400aa561a4c59023761 Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.453112 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85f6d5d975-gwkc5"] Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.756421 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" event={"ID":"d85b9f62-3b47-4198-923b-912ea15f3761","Type":"ContainerStarted","Data":"db7f357385b58c221ab4f070dcd644a3331ee2f50835ace5c1dfbc58fe36add9"} Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.759111 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" event={"ID":"1af8b89d-5426-40c8-a5c9-00bfb030d4e6","Type":"ContainerStarted","Data":"223aa791e857a0e7c568a412c52e1175ee977f7905695aef54826d0cc528525d"} Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.759136 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" event={"ID":"1af8b89d-5426-40c8-a5c9-00bfb030d4e6","Type":"ContainerStarted","Data":"0ddf102b9b6202e14c264679d02506f3f708fa31937281a5b2cd7b212678f20d"} Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.759531 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.767292 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" event={"ID":"50a7cfbb-3448-42b2-b707-94b3f9f5baf2","Type":"ContainerStarted","Data":"8ee8cbc3f410fa8428e55d8c1f7cef8c5c234879f9dc9400aa561a4c59023761"} Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.793262 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" 
podStartSLOduration=27.793244534 podStartE2EDuration="27.793244534s" podCreationTimestamp="2026-01-21 00:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:52.791607117 +0000 UTC m=+225.031474763" watchObservedRunningTime="2026-01-21 00:09:52.793244534 +0000 UTC m=+225.033112180" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.825331 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.932367 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-574b75df8-jvwgz" Jan 21 00:09:52 crc kubenswrapper[4873]: I0121 00:09:52.982307 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.206878 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.207113 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.249802 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.618696 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.618818 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.663528 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.778809 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjfd5" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="registry-server" containerID="cri-o://cded240bd90d0eec75e383075d390db2aa74fb120cf1a28be65ef5b4fdead0d7" gracePeriod=2 Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.840910 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:09:53 crc kubenswrapper[4873]: I0121 00:09:53.847054 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.379767 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.438384 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.487716 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.785116 4873 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-vbxjc" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="registry-server" containerID="cri-o://dabcf6ab0fc7e488489beeb82ccd9104289cfa52338d880c84bf34ba2c909c5a" gracePeriod=2 Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.877709 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:09:54 crc kubenswrapper[4873]: I0121 00:09:54.950133 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.776409 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.800987 4873 generic.go:334] "Generic (PLEG): container finished" podID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerID="cded240bd90d0eec75e383075d390db2aa74fb120cf1a28be65ef5b4fdead0d7" exitCode=0 Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.801024 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerDied","Data":"cded240bd90d0eec75e383075d390db2aa74fb120cf1a28be65ef5b4fdead0d7"} Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.802772 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" event={"ID":"50a7cfbb-3448-42b2-b707-94b3f9f5baf2","Type":"ContainerStarted","Data":"39c04ab00c048caa18d2dbd4960633a7680d4cd01f10a0fad07195a99b79718e"} Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.803040 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.804591 4873 generic.go:334] "Generic (PLEG): container finished" podID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerID="dabcf6ab0fc7e488489beeb82ccd9104289cfa52338d880c84bf34ba2c909c5a" exitCode=0 Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.804638 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerDied","Data":"dabcf6ab0fc7e488489beeb82ccd9104289cfa52338d880c84bf34ba2c909c5a"} Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.805828 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" event={"ID":"d85b9f62-3b47-4198-923b-912ea15f3761","Type":"ContainerStarted","Data":"8044216f18e093e493fc4ca4626ddd594bad972efefb55365a6b5ce6ee1f13db"} Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.806011 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n2txz" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="registry-server" containerID="cri-o://9999b359f66441b07812d733e48fd02a512e418c048b9797f0c6937a2561124b" gracePeriod=2 Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.808537 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" Jan 21 00:09:56 crc kubenswrapper[4873]: I0121 00:09:56.835051 4873 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-b7c8797d5-pkplk" podStartSLOduration=6.835026323 podStartE2EDuration="6.835026323s" podCreationTimestamp="2026-01-21 00:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:56.832689376 +0000 UTC m=+229.072557022" watchObservedRunningTime="2026-01-21 00:09:56.835026323 +0000 UTC m=+229.074893969" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.055785 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.121708 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz6cw\" (UniqueName: \"kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw\") pod \"c0fd5454-ac0d-4873-a5cc-d690883223a4\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.121788 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content\") pod \"c0fd5454-ac0d-4873-a5cc-d690883223a4\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.121854 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities\") pod \"c0fd5454-ac0d-4873-a5cc-d690883223a4\" (UID: \"c0fd5454-ac0d-4873-a5cc-d690883223a4\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.123366 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities" (OuterVolumeSpecName: "utilities") pod "c0fd5454-ac0d-4873-a5cc-d690883223a4" (UID: "c0fd5454-ac0d-4873-a5cc-d690883223a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.126811 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw" (OuterVolumeSpecName: "kube-api-access-cz6cw") pod "c0fd5454-ac0d-4873-a5cc-d690883223a4" (UID: "c0fd5454-ac0d-4873-a5cc-d690883223a4"). InnerVolumeSpecName "kube-api-access-cz6cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.223079 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz6cw\" (UniqueName: \"kubernetes.io/projected/c0fd5454-ac0d-4873-a5cc-d690883223a4-kube-api-access-cz6cw\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.223112 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.678008 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0fd5454-ac0d-4873-a5cc-d690883223a4" (UID: "c0fd5454-ac0d-4873-a5cc-d690883223a4"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.729777 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fd5454-ac0d-4873-a5cc-d690883223a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.813643 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbxjc" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.813629 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbxjc" event={"ID":"c0fd5454-ac0d-4873-a5cc-d690883223a4","Type":"ContainerDied","Data":"d3813b9cb5a4d7a06a63c525b7331d0c90775cd816eaa8884193190cc88269de"} Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.813751 4873 scope.go:117] "RemoveContainer" containerID="dabcf6ab0fc7e488489beeb82ccd9104289cfa52338d880c84bf34ba2c909c5a" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.820746 4873 generic.go:334] "Generic (PLEG): container finished" podID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerID="9999b359f66441b07812d733e48fd02a512e418c048b9797f0c6937a2561124b" exitCode=0 Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.820793 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerDied","Data":"9999b359f66441b07812d733e48fd02a512e418c048b9797f0c6937a2561124b"} Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.837895 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" podStartSLOduration=7.837877755 podStartE2EDuration="7.837877755s" podCreationTimestamp="2026-01-21 00:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:09:57.83526599 +0000 UTC m=+230.075133646" watchObservedRunningTime="2026-01-21 00:09:57.837877755 +0000 UTC m=+230.077745401" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.857084 4873 scope.go:117] "RemoveContainer" containerID="ef5bd9f04718f58ae0cffcdc6996dd85e10016985a6afec09475eb65bbec795d" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.866172 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.869742 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbxjc"] Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.879926 4873 scope.go:117] "RemoveContainer" containerID="1f15c02cac71b277463451fd8ee9977ed936aecf484e1b6d91c11a21f0549fd3" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.904603 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.931879 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content\") pod \"c2aa2871-6143-459b-9607-1fdde2f2f22c\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.931961 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx59p\" (UniqueName: \"kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p\") pod \"c2aa2871-6143-459b-9607-1fdde2f2f22c\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.932003 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities\") pod \"c2aa2871-6143-459b-9607-1fdde2f2f22c\" (UID: \"c2aa2871-6143-459b-9607-1fdde2f2f22c\") " Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.935600 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities" (OuterVolumeSpecName: "utilities") pod "c2aa2871-6143-459b-9607-1fdde2f2f22c" (UID: "c2aa2871-6143-459b-9607-1fdde2f2f22c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.938655 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p" (OuterVolumeSpecName: "kube-api-access-nx59p") pod "c2aa2871-6143-459b-9607-1fdde2f2f22c" (UID: "c2aa2871-6143-459b-9607-1fdde2f2f22c"). InnerVolumeSpecName "kube-api-access-nx59p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:57 crc kubenswrapper[4873]: I0121 00:09:57.989003 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2aa2871-6143-459b-9607-1fdde2f2f22c" (UID: "c2aa2871-6143-459b-9607-1fdde2f2f22c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.034453 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.034496 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx59p\" (UniqueName: \"kubernetes.io/projected/c2aa2871-6143-459b-9607-1fdde2f2f22c-kube-api-access-nx59p\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.034511 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2aa2871-6143-459b-9607-1fdde2f2f22c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.071063 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" path="/var/lib/kubelet/pods/c0fd5454-ac0d-4873-a5cc-d690883223a4/volumes" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.812642 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.828948 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2txz" event={"ID":"53754e90-fc88-49e7-ad8e-50552d9bc151","Type":"ContainerDied","Data":"62eb832f9ba75b1d5f12d39d962c5415577e02489a0dc1f28e284adf8e9ed33f"} Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.829053 4873 scope.go:117] "RemoveContainer" containerID="9999b359f66441b07812d733e48fd02a512e418c048b9797f0c6937a2561124b" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.829224 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2txz" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.841748 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjfd5" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.841765 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjfd5" event={"ID":"c2aa2871-6143-459b-9607-1fdde2f2f22c","Type":"ContainerDied","Data":"cd1e6c918404b00a676b1bca8903bbd867858d6a3fc504fd94a39ade4c75dbd0"} Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.844631 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content\") pod \"53754e90-fc88-49e7-ad8e-50552d9bc151\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.844691 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szff9\" (UniqueName: \"kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9\") pod \"53754e90-fc88-49e7-ad8e-50552d9bc151\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.844719 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities\") pod \"53754e90-fc88-49e7-ad8e-50552d9bc151\" (UID: \"53754e90-fc88-49e7-ad8e-50552d9bc151\") " Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.845636 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities" (OuterVolumeSpecName: "utilities") pod "53754e90-fc88-49e7-ad8e-50552d9bc151" (UID: "53754e90-fc88-49e7-ad8e-50552d9bc151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.848518 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9" (OuterVolumeSpecName: "kube-api-access-szff9") pod "53754e90-fc88-49e7-ad8e-50552d9bc151" (UID: "53754e90-fc88-49e7-ad8e-50552d9bc151"). InnerVolumeSpecName "kube-api-access-szff9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.853751 4873 scope.go:117] "RemoveContainer" containerID="c32c5f58c598d88e121a507e4eab73137dd8227b9877b488b0d07b20ca21a6cb" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.872586 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53754e90-fc88-49e7-ad8e-50552d9bc151" (UID: "53754e90-fc88-49e7-ad8e-50552d9bc151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.883863 4873 scope.go:117] "RemoveContainer" containerID="13fd1578f875694bff8e22b92bd592bcb6eed78891a1b953e82bbe38eaae6760" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.904604 4873 scope.go:117] "RemoveContainer" containerID="cded240bd90d0eec75e383075d390db2aa74fb120cf1a28be65ef5b4fdead0d7" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.908414 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.913828 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjfd5"] Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.924412 4873 scope.go:117] "RemoveContainer" containerID="e42ab9fed38fcb233d2ade309a582bb7467fdad0c8149b18ba2cb8b3744537cb" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.944394 4873 scope.go:117] "RemoveContainer" containerID="27679cd03a138dd9c89b2224a6d729b08b3e1952bbd7d24538da5a526bb533af" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.946855 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.946968 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szff9\" (UniqueName: \"kubernetes.io/projected/53754e90-fc88-49e7-ad8e-50552d9bc151-kube-api-access-szff9\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:58 crc kubenswrapper[4873]: I0121 00:09:58.947067 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53754e90-fc88-49e7-ad8e-50552d9bc151-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.161013 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.163644 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2txz"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.179730 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqslw"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.179954 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqslw" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="registry-server" containerID="cri-o://bfd89a50d439dea40bea4e18e4db98e950ebc60ef86e3d87813bcef6abe128d8" gracePeriod=2 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851372 4873 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851680 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713" gracePeriod=15 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851788 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0" gracePeriod=15 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851835 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac" gracePeriod=15 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851876 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e" gracePeriod=15 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.851914 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c" gracePeriod=15 Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.861919 4873 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862159 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862176 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862188 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862195 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862208 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862215 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862225 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862232 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862242 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862249 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862257 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862264 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862274 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862282 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862291 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862298 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862310 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862318 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862329 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862337 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862347 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862354 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862368 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862374 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862384 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862391 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="extract-utilities" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862400 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862407 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862417 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862424 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="extract-content" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862542 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862576 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862587 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862597 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862604 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862618 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862628 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862638 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862648 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fd5454-ac0d-4873-a5cc-d690883223a4" containerName="registry-server" Jan 21 00:09:59 crc kubenswrapper[4873]: E0121 00:09:59.862770 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.862780 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.865205 4873 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.865906 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.869632 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.909900 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.966420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.966662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.966780 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.966873 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.966975 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.967046 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.967155 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:09:59 crc kubenswrapper[4873]: I0121 00:09:59.967241 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.068407 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.068658 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.068767 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.068893 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069013 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069121 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069241 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069351 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069490 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.068526 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069711 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069832 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.069956 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.070074 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.070183 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.070289 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.070476 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53754e90-fc88-49e7-ad8e-50552d9bc151" path="/var/lib/kubelet/pods/53754e90-fc88-49e7-ad8e-50552d9bc151/volumes" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.071472 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2aa2871-6143-459b-9607-1fdde2f2f22c" path="/var/lib/kubelet/pods/c2aa2871-6143-459b-9607-1fdde2f2f22c/volumes" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.211753 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:10:00 crc kubenswrapper[4873]: I0121 00:10:00.854424 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"311c0bc19c6876d401ce9032d84ff84a0e0ef5a4238962fef842cb3df0440c69"} Jan 21 00:10:01 crc kubenswrapper[4873]: E0121 00:10:01.530340 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c9679561a140e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,LastTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.863153 4873 generic.go:334] "Generic (PLEG): container finished" podID="07e2005b-07a3-41c9-b5a8-45faebbedbff" containerID="a8b7e6a64007b0b9be519b0e9f352ccd93a50c64b9ed290c37695ea13329b463" exitCode=0 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.863213 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07e2005b-07a3-41c9-b5a8-45faebbedbff","Type":"ContainerDied","Data":"a8b7e6a64007b0b9be519b0e9f352ccd93a50c64b9ed290c37695ea13329b463"} Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.864056 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.864425 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.866784 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.868209 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.868973 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0" exitCode=0 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.869001 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac" exitCode=0 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.869011 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e" exitCode=0 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.869021 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c" exitCode=2 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.869051 4873 scope.go:117] "RemoveContainer" containerID="e7461f4a6fcfebbf11a3ecc954e69d690e4454243b315659c7ca47346d744029" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.871753 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4273d27-1592-4728-8bd2-14888045fffb" containerID="bfd89a50d439dea40bea4e18e4db98e950ebc60ef86e3d87813bcef6abe128d8" exitCode=0 Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.871799 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerDied","Data":"bfd89a50d439dea40bea4e18e4db98e950ebc60ef86e3d87813bcef6abe128d8"} Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.872934 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f"} Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.873702 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.873852 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:01 crc kubenswrapper[4873]: I0121 00:10:01.997160 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.002729 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.003473 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc 
kubenswrapper[4873]: I0121 00:10:02.004068 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.004583 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.385792 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.386992 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.387278 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.387532 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.387806 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.499944 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2f2\" (UniqueName: \"kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2\") pod \"f4273d27-1592-4728-8bd2-14888045fffb\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.500084 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content\") pod \"f4273d27-1592-4728-8bd2-14888045fffb\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.500144 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities\") pod \"f4273d27-1592-4728-8bd2-14888045fffb\" (UID: \"f4273d27-1592-4728-8bd2-14888045fffb\") " Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.501752 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities" (OuterVolumeSpecName: "utilities") pod "f4273d27-1592-4728-8bd2-14888045fffb" (UID: "f4273d27-1592-4728-8bd2-14888045fffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.505598 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2" (OuterVolumeSpecName: "kube-api-access-vd2f2") pod "f4273d27-1592-4728-8bd2-14888045fffb" (UID: "f4273d27-1592-4728-8bd2-14888045fffb"). InnerVolumeSpecName "kube-api-access-vd2f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.601287 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.601319 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd2f2\" (UniqueName: \"kubernetes.io/projected/f4273d27-1592-4728-8bd2-14888045fffb-kube-api-access-vd2f2\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.623853 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4273d27-1592-4728-8bd2-14888045fffb" (UID: "f4273d27-1592-4728-8bd2-14888045fffb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.703070 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4273d27-1592-4728-8bd2-14888045fffb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.881299 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.884872 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqslw" event={"ID":"f4273d27-1592-4728-8bd2-14888045fffb","Type":"ContainerDied","Data":"c51af64bb98a0f535d3b0e3cb65ad1d8d788fc926cb31e116e7447136653e371"} Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.884924 4873 scope.go:117] "RemoveContainer" containerID="bfd89a50d439dea40bea4e18e4db98e950ebc60ef86e3d87813bcef6abe128d8" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.885060 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqslw" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.885694 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.885897 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.886110 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.886353 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.898947 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.900140 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.901088 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.901355 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 00:10:02.903027 4873 scope.go:117] "RemoveContainer" containerID="93fdb7b1b0fac8e48ebfe56f482ef09cad576a2f49fd7da6b8e45b38efee3a5b" Jan 21 00:10:02 crc kubenswrapper[4873]: I0121 
00:10:02.919713 4873 scope.go:117] "RemoveContainer" containerID="bb648a0f76631b0cd144e4a32fd789f7d0fe391a3b6fa720c26e13895bc5fabe" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.194100 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.195093 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.195358 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.195706 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.196161 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309351 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access\") pod \"07e2005b-07a3-41c9-b5a8-45faebbedbff\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309415 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock\") pod \"07e2005b-07a3-41c9-b5a8-45faebbedbff\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309432 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir\") pod \"07e2005b-07a3-41c9-b5a8-45faebbedbff\" (UID: \"07e2005b-07a3-41c9-b5a8-45faebbedbff\") " Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309619 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07e2005b-07a3-41c9-b5a8-45faebbedbff" (UID: "07e2005b-07a3-41c9-b5a8-45faebbedbff"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309701 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock" (OuterVolumeSpecName: "var-lock") pod "07e2005b-07a3-41c9-b5a8-45faebbedbff" (UID: "07e2005b-07a3-41c9-b5a8-45faebbedbff"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.309851 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.315001 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07e2005b-07a3-41c9-b5a8-45faebbedbff" (UID: "07e2005b-07a3-41c9-b5a8-45faebbedbff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.411042 4873 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07e2005b-07a3-41c9-b5a8-45faebbedbff-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.411080 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07e2005b-07a3-41c9-b5a8-45faebbedbff-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.895408 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.896844 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713" exitCode=0 Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.899762 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07e2005b-07a3-41c9-b5a8-45faebbedbff","Type":"ContainerDied","Data":"488069edb2ab25e89bcb5c4c15054ba89df659ccc49dfad9a5a456f363c4cc88"} Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.899802 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488069edb2ab25e89bcb5c4c15054ba89df659ccc49dfad9a5a456f363c4cc88" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.899896 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.933670 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.934399 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.934976 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:03 crc kubenswrapper[4873]: I0121 00:10:03.935434 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.326693 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.327515 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.328113 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.328603 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.329017 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.329355 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.329604 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449275 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449372 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449389 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449415 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449443 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449584 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449685 4873 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449701 4873 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.449712 4873 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:04 crc kubenswrapper[4873]: E0121 00:10:04.792697 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c9679561a140e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,LastTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.908372 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.909075 4873 scope.go:117] "RemoveContainer" containerID="faf0494bb44e060efae3e4a8c922d8d6269e18a23fc77e78281267aceec7dbe0" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.909196 4873 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.922432 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.923065 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.923333 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.923736 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.924157 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.930894 4873 scope.go:117] "RemoveContainer" containerID="334134e4587bd5eba762483cfaf938914f63dbb3c8f2df0e6dc02efa24df66ac" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.942416 4873 scope.go:117] "RemoveContainer" containerID="3921bb220b11354bfcda54b460f3a56068c7b907d15712774c96a33d0658fe8e" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.958919 4873 scope.go:117] "RemoveContainer" containerID="6ec8821882958c0ca9b9c40bb8398272210e25471950848afecfa68ddc05cf9c" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.975002 4873 scope.go:117] "RemoveContainer" containerID="ec51548ace5406f2a20bcb96a4dbc8038aea7cdce4404096e75739ec4beeb713" Jan 21 00:10:04 crc kubenswrapper[4873]: I0121 00:10:04.992381 4873 scope.go:117] "RemoveContainer" containerID="74b59f82b14d8ed7d856e96af79a68e9e0ffd7615cba1b550aff6d074f1be309" Jan 21 00:10:06 crc kubenswrapper[4873]: I0121 00:10:06.070997 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.907498 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.908042 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.908322 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.908636 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.908905 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:06 crc kubenswrapper[4873]: E0121 00:10:06.908926 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:10:08 crc kubenswrapper[4873]: I0121 00:10:08.066391 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:08 crc kubenswrapper[4873]: I0121 00:10:08.067130 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:08 crc kubenswrapper[4873]: I0121 00:10:08.067466 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:08 crc kubenswrapper[4873]: I0121 00:10:08.067753 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.916327 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.916737 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.917218 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.917857 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.918225 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:10 crc kubenswrapper[4873]: I0121 00:10:10.918279 4873 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 00:10:10 crc kubenswrapper[4873]: E0121 00:10:10.918767 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms" Jan 21 00:10:11 crc kubenswrapper[4873]: E0121 00:10:11.119933 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Jan 21 00:10:11 crc kubenswrapper[4873]: E0121 00:10:11.521034 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Jan 21 00:10:12 crc kubenswrapper[4873]: E0121 00:10:12.138118 4873 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" volumeName="registry-storage" Jan 21 00:10:12 crc kubenswrapper[4873]: E0121 00:10:12.322028 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.063447 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.064168 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.064386 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.064613 4873 status_manager.go:851] "Failed to get status for pod" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.064779 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.082182 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.082232 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:13 crc kubenswrapper[4873]: E0121 00:10:13.082631 4873 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.083330 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:13 crc kubenswrapper[4873]: W0121 00:10:13.103445 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-733096d6c6de70e018394d93db42fef3462d10b984464f138a016e09451546ab WatchSource:0}: Error finding container 733096d6c6de70e018394d93db42fef3462d10b984464f138a016e09451546ab: Status 404 returned error can't find the container with id 733096d6c6de70e018394d93db42fef3462d10b984464f138a016e09451546ab Jan 21 00:10:13 crc kubenswrapper[4873]: E0121 00:10:13.923904 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.968077 4873 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="896975f467d274a2479ef02ca9dfef2a91f0831c0fd8413b8326f7c37c13b216" exitCode=0 Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.968159 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"896975f467d274a2479ef02ca9dfef2a91f0831c0fd8413b8326f7c37c13b216"} Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.968201 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"733096d6c6de70e018394d93db42fef3462d10b984464f138a016e09451546ab"} Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.968639 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.968684 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:13 crc kubenswrapper[4873]: E0121 00:10:13.969621 4873 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.969623 4873 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.970185 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4273d27-1592-4728-8bd2-14888045fffb" pod="openshift-marketplace/redhat-operators-hqslw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hqslw\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.970738 4873 status_manager.go:851] "Failed to get status for pod" 
podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:13 crc kubenswrapper[4873]: I0121 00:10:13.971205 4873 status_manager.go:851] "Failed to get status for pod" podUID="d85b9f62-3b47-4198-923b-912ea15f3761" pod="openshift-controller-manager/controller-manager-85f6d5d975-gwkc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85f6d5d975-gwkc5\": dial tcp 38.102.83.192:6443: connect: connection refused" Jan 21 00:10:14 crc kubenswrapper[4873]: E0121 00:10:14.794167 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c9679561a140e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,LastTimestamp:2026-01-21 00:10:01.529521166 +0000 UTC m=+233.769388812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.004501 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.005622 4873 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c" exitCode=1 Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.005721 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c"} Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.006256 4873 scope.go:117] "RemoveContainer" containerID="38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c" Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.015062 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a7bb5283615c98aeb27f602a77c6d39c26522c521551a1dd7d687b23e612e661"} Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.015104 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9a627449cf732a6e57bce271adb1326cc3cac55189cb21c4002f7d7386ca73dd"} Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.015114 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f51263b363771333646731f7331a60d26a0a7ceeede885303cce1f4fbfa1cb50"} Jan 21 00:10:17 crc kubenswrapper[4873]: I0121 00:10:17.015122 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df47fd8faa386460d3c2b39a2dd5c6481e771ef67c2fe6bdc2e727ac215f70bb"} Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.027066 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d672b652ad4e87ef2b5e8179167cbaf23d983e788c22509f04925466eaf905d4"} Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.027474 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.027606 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.027617 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.032449 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.032526 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a352bac7862e3e7fe3caee5a50517491baf694c47b8247987adb1319e2a739e"} Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.084669 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.084735 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.092878 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]log ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]etcd ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 00:10:18 crc kubenswrapper[4873]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/priority-and-fairness-filter ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-apiextensions-informers ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-apiextensions-controllers ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/crd-informer-synced ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-system-namespaces-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 21 00:10:18 crc kubenswrapper[4873]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 21 00:10:18 crc kubenswrapper[4873]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/bootstrap-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/start-kube-aggregator-informers ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-registration-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-discovery-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]autoregister-completion ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-openapi-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 21 00:10:18 crc kubenswrapper[4873]: livez check failed Jan 21 00:10:18 crc kubenswrapper[4873]: I0121 00:10:18.092950 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:10:21 crc kubenswrapper[4873]: I0121 00:10:21.979819 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:10:21 crc kubenswrapper[4873]: I0121 00:10:21.981729 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 00:10:21 crc 
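Editor's note on the startup-probe failure above: the probe output is the apiserver's verbose health listing, one [+]/[-] line per check, with two post-start hooks still failing and an overall "livez check failed" returned as HTTP 500. Below is a small, stdlib-only Go sketch of fetching and printing such a verbose health endpoint; the URL, port, and the skipped TLS verification are assumptions made to keep the example self-contained, not how the kubelet's probe is actually configured.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical endpoint; "?verbose" asks for the per-check [+]/[-]
	// breakdown of the kind shown in the log.
	url := "https://localhost:6443/livez?verbose"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; a real
		// probe would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A non-200 status while post-start hooks are still running is what the
	// kubelet reports as "HTTP probe failed with statuscode: 500".
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}
```

Once the remaining hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) complete, the same request returns 200 and the startup probe flips to "started", as it does a few seconds later in this log.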
kubenswrapper[4873]: I0121 00:10:21.981795 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 00:10:22 crc kubenswrapper[4873]: I0121 00:10:22.427244 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:10:23 crc kubenswrapper[4873]: I0121 00:10:23.037497 4873 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:23 crc kubenswrapper[4873]: I0121 00:10:23.092465 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:23 crc kubenswrapper[4873]: I0121 00:10:23.096459 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="efc02d53-389c-4296-98f0-e7d218516741" Jan 21 00:10:24 crc kubenswrapper[4873]: I0121 00:10:24.068431 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:24 crc kubenswrapper[4873]: I0121 00:10:24.068768 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:24 crc kubenswrapper[4873]: I0121 00:10:24.076350 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:25 crc kubenswrapper[4873]: I0121 00:10:25.075297 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:25 crc kubenswrapper[4873]: I0121 00:10:25.075344 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:26 crc kubenswrapper[4873]: I0121 00:10:26.080937 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:26 crc kubenswrapper[4873]: I0121 00:10:26.080993 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:26 crc kubenswrapper[4873]: I0121 00:10:26.088942 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:27 crc kubenswrapper[4873]: I0121 00:10:27.085821 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:27 crc kubenswrapper[4873]: I0121 00:10:27.085854 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="69c7f1d8-0551-484a-bea5-b688bb3e0793" Jan 21 00:10:28 crc kubenswrapper[4873]: I0121 00:10:28.111327 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="efc02d53-389c-4296-98f0-e7d218516741" Jan 21 00:10:31 crc kubenswrapper[4873]: I0121 00:10:31.979387 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 00:10:31 crc kubenswrapper[4873]: I0121 00:10:31.980036 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[4873]: I0121 00:10:32.283634 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 00:10:32 crc kubenswrapper[4873]: I0121 00:10:32.422812 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 00:10:32 crc kubenswrapper[4873]: I0121 00:10:32.825635 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 00:10:32 crc kubenswrapper[4873]: I0121 00:10:32.947015 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.059046 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.252185 4873 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.328652 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.329066 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.391388 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.584102 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 00:10:33 crc kubenswrapper[4873]: I0121 00:10:33.668055 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.263591 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.314965 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.420226 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.463608 4873 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.606083 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.631580 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 00:10:34 crc kubenswrapper[4873]: I0121 00:10:34.753248 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.034466 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.251996 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.429077 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.468873 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.531748 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.684631 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.742163 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.803764 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.848285 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.884725 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.918424 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 00:10:35 crc kubenswrapper[4873]: I0121 00:10:35.941312 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.082870 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.134542 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.190965 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.215165 4873 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.260081 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.433782 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.489972 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.639158 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.869155 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.928032 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.949355 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 00:10:36 crc kubenswrapper[4873]: I0121 00:10:36.971034 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.004819 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.113415 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.164685 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.242201 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.247008 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.304008 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.396999 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.434258 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.585785 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.730970 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.874486 4873 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.922666 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.925146 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.940818 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 00:10:37 crc kubenswrapper[4873]: I0121 00:10:37.943989 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.017360 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.017958 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.031480 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.090017 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.111937 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.290516 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.383823 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.454924 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.514541 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.678033 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.723738 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.727693 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.830960 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.939578 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.942795 4873 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 00:10:38 crc kubenswrapper[4873]: I0121 00:10:38.959745 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.044839 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.049347 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.114425 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.117486 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.138293 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.140024 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.224859 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.277866 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.282143 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.292811 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.298167 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.427007 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.500769 4873 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.509468 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.518506 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.556520 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.588078 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 
00:10:39.600759 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.819521 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.838021 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.879785 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.936635 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.951180 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.970494 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.983644 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 00:10:39 crc kubenswrapper[4873]: I0121 00:10:39.987427 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.007300 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.170637 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.188713 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.312792 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.331879 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.347320 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.406203 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.460754 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.536148 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.540998 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.551322 
4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.608869 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.760521 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.783928 4873 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.793263 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.799925 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.833781 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.853367 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 00:10:40 crc kubenswrapper[4873]: I0121 00:10:40.996333 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.088801 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.093750 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.310405 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.348425 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.451601 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.496717 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.618463 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.691648 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.698455 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.741280 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.744933 4873 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.774585 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.795668 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.899051 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.910209 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.940258 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.976073 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.979914 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.979963 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.980014 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.980667 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7a352bac7862e3e7fe3caee5a50517491baf694c47b8247987adb1319e2a739e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.980787 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://7a352bac7862e3e7fe3caee5a50517491baf694c47b8247987adb1319e2a739e" gracePeriod=30 Jan 21 00:10:41 crc kubenswrapper[4873]: I0121 00:10:41.985455 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.068793 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.237786 4873 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.246076 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.291625 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.313314 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.387840 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.394810 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.400487 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.416744 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.426991 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.435383 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.463621 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.468764 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.538866 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.550777 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.561003 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.590378 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.599496 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.622100 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.665847 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 00:10:42 crc 
kubenswrapper[4873]: I0121 00:10:42.734517 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.802827 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.832944 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.889111 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.911103 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.912155 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 00:10:42 crc kubenswrapper[4873]: I0121 00:10:42.960796 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.003700 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.046357 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.050949 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.156388 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.287528 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.311211 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.392308 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.407682 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.502923 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.515699 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.564009 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.606943 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 
00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.611685 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.652702 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.662045 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.705289 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.705686 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.911932 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 00:10:43 crc kubenswrapper[4873]: I0121 00:10:43.998264 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.036625 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.039749 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.121725 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.137885 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.213649 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.250766 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.269908 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.276292 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.283051 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.351092 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.362153 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.385242 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.397221 
4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.483898 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.652062 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.760445 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.785888 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.924413 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.963018 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 00:10:44 crc kubenswrapper[4873]: I0121 00:10:44.998537 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.036048 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.044274 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.047788 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.048027 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.081328 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.112967 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.224696 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.282677 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.299661 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.526036 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.526609 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 00:10:45 crc 
kubenswrapper[4873]: I0121 00:10:45.582382 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.683629 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.699509 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.738304 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.760115 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.861806 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.883485 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.924863 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 00:10:45 crc kubenswrapper[4873]: I0121 00:10:45.950306 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.068995 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.160164 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.327747 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.361983 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.381301 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.381383 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.565448 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.724353 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.878267 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 00:10:46 crc kubenswrapper[4873]: I0121 00:10:46.904574 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 00:10:47 crc 
kubenswrapper[4873]: I0121 00:10:47.041714 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.121285 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.211191 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.340177 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.371268 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.439187 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.555075 4873 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.655644 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.686997 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.720166 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 00:10:47 crc kubenswrapper[4873]: I0121 00:10:47.838419 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.072677 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.131075 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.166680 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.297039 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.373908 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.486217 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 00:10:48 crc kubenswrapper[4873]: I0121 00:10:48.784677 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 00:10:49 crc kubenswrapper[4873]: I0121 00:10:49.186135 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.907170 4873 
reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.907337 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=51.907296538 podStartE2EDuration="51.907296538s" podCreationTimestamp="2026-01-21 00:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:22.723031471 +0000 UTC m=+254.962899127" watchObservedRunningTime="2026-01-21 00:10:50.907296538 +0000 UTC m=+283.147164204" Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.911469 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqslw","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.911526 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.911566 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6q5pv","openshift-marketplace/redhat-marketplace-hkdsj","openshift-marketplace/marketplace-operator-79b997595-bhsdr","openshift-marketplace/redhat-operators-dqn92","openshift-marketplace/community-operators-xv5c9"] Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.911795 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xv5c9" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="registry-server" containerID="cri-o://15b517abec7d6c15d7b1361bc1024e3dc7dcb3970a0d1bbcbdbe3936f5bfb21f" gracePeriod=30 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.912177 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6q5pv" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" containerID="cri-o://331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" gracePeriod=30 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.912611 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkdsj" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="registry-server" containerID="cri-o://3f971b98b2b9344536efcf4dd8b96d5c67d429d665abd13a749924b7fdd65690" gracePeriod=30 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.914299 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqn92" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="registry-server" containerID="cri-o://c7304b2e21b6bc3558e09d266937c26f7826381acc9de63e6b6e68aab12a6288" gracePeriod=30 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.914387 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerName="marketplace-operator" containerID="cri-o://c5c752c1bbcfdc7a680c325720bbfa5744cd18a32522ab71b8c1987385cd1311" gracePeriod=30 Jan 21 00:10:50 crc kubenswrapper[4873]: I0121 00:10:50.949676 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.949655856 
podStartE2EDuration="27.949655856s" podCreationTimestamp="2026-01-21 00:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:50.946794007 +0000 UTC m=+283.186661693" watchObservedRunningTime="2026-01-21 00:10:50.949655856 +0000 UTC m=+283.189523512" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.247762 4873 generic.go:334] "Generic (PLEG): container finished" podID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerID="c5c752c1bbcfdc7a680c325720bbfa5744cd18a32522ab71b8c1987385cd1311" exitCode=0 Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.247869 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" event={"ID":"7064ffc6-970d-4592-a979-ed7fd110cbc8","Type":"ContainerDied","Data":"c5c752c1bbcfdc7a680c325720bbfa5744cd18a32522ab71b8c1987385cd1311"} Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.250715 4873 generic.go:334] "Generic (PLEG): container finished" podID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerID="3f971b98b2b9344536efcf4dd8b96d5c67d429d665abd13a749924b7fdd65690" exitCode=0 Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.250826 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerDied","Data":"3f971b98b2b9344536efcf4dd8b96d5c67d429d665abd13a749924b7fdd65690"} Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.253262 4873 generic.go:334] "Generic (PLEG): container finished" podID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerID="331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" exitCode=0 Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.253297 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerDied","Data":"331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b"} Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.255386 4873 generic.go:334] "Generic (PLEG): container finished" podID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerID="c7304b2e21b6bc3558e09d266937c26f7826381acc9de63e6b6e68aab12a6288" exitCode=0 Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.255439 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerDied","Data":"c7304b2e21b6bc3558e09d266937c26f7826381acc9de63e6b6e68aab12a6288"} Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.259232 4873 generic.go:334] "Generic (PLEG): container finished" podID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerID="15b517abec7d6c15d7b1361bc1024e3dc7dcb3970a0d1bbcbdbe3936f5bfb21f" exitCode=0 Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.259260 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerDied","Data":"15b517abec7d6c15d7b1361bc1024e3dc7dcb3970a0d1bbcbdbe3936f5bfb21f"} Jan 21 00:10:51 crc kubenswrapper[4873]: E0121 00:10:51.279019 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b is running failed: 
container process not found" containerID="331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 00:10:51 crc kubenswrapper[4873]: E0121 00:10:51.279423 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b is running failed: container process not found" containerID="331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 00:10:51 crc kubenswrapper[4873]: E0121 00:10:51.279730 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b is running failed: container process not found" containerID="331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 00:10:51 crc kubenswrapper[4873]: E0121 00:10:51.279779 4873 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6q5pv" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.347390 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.460063 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content\") pod \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.460144 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlh7\" (UniqueName: \"kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7\") pod \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.460261 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities\") pod \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\" (UID: \"bca22ae0-3824-47b7-8520-f7b7f0f657ab\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.461251 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities" (OuterVolumeSpecName: "utilities") pod "bca22ae0-3824-47b7-8520-f7b7f0f657ab" (UID: "bca22ae0-3824-47b7-8520-f7b7f0f657ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.467857 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7" (OuterVolumeSpecName: "kube-api-access-tqlh7") pod "bca22ae0-3824-47b7-8520-f7b7f0f657ab" (UID: "bca22ae0-3824-47b7-8520-f7b7f0f657ab"). InnerVolumeSpecName "kube-api-access-tqlh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.484098 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bca22ae0-3824-47b7-8520-f7b7f0f657ab" (UID: "bca22ae0-3824-47b7-8520-f7b7f0f657ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.494043 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.498302 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.504399 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.523700 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.561711 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.561744 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca22ae0-3824-47b7-8520-f7b7f0f657ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.561756 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlh7\" (UniqueName: \"kubernetes.io/projected/bca22ae0-3824-47b7-8520-f7b7f0f657ab-kube-api-access-tqlh7\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.662959 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx9hj\" (UniqueName: \"kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj\") pod \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663081 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities\") pod \"72d11f1d-5410-4a20-9b17-2f04a831a398\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663119 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities\") pod 
\"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663173 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content\") pod \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\" (UID: \"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663217 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwmt2\" (UniqueName: \"kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2\") pod \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663254 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxh6\" (UniqueName: \"kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6\") pod \"72d11f1d-5410-4a20-9b17-2f04a831a398\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663300 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca\") pod \"7064ffc6-970d-4592-a979-ed7fd110cbc8\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663377 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcszd\" (UniqueName: \"kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd\") pod \"7064ffc6-970d-4592-a979-ed7fd110cbc8\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663432 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics\") pod \"7064ffc6-970d-4592-a979-ed7fd110cbc8\" (UID: \"7064ffc6-970d-4592-a979-ed7fd110cbc8\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663470 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content\") pod \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663503 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content\") pod \"72d11f1d-5410-4a20-9b17-2f04a831a398\" (UID: \"72d11f1d-5410-4a20-9b17-2f04a831a398\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663582 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities\") pod \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\" (UID: \"e16d4483-cf0d-4977-bad2-ed10f6dde4c7\") " Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663790 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities" (OuterVolumeSpecName: "utilities") pod "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" (UID: "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.663960 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.664020 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities" (OuterVolumeSpecName: "utilities") pod "72d11f1d-5410-4a20-9b17-2f04a831a398" (UID: "72d11f1d-5410-4a20-9b17-2f04a831a398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.664684 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7064ffc6-970d-4592-a979-ed7fd110cbc8" (UID: "7064ffc6-970d-4592-a979-ed7fd110cbc8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.665121 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities" (OuterVolumeSpecName: "utilities") pod "e16d4483-cf0d-4977-bad2-ed10f6dde4c7" (UID: "e16d4483-cf0d-4977-bad2-ed10f6dde4c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.666225 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj" (OuterVolumeSpecName: "kube-api-access-xx9hj") pod "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" (UID: "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc"). InnerVolumeSpecName "kube-api-access-xx9hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.666483 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6" (OuterVolumeSpecName: "kube-api-access-zsxh6") pod "72d11f1d-5410-4a20-9b17-2f04a831a398" (UID: "72d11f1d-5410-4a20-9b17-2f04a831a398"). InnerVolumeSpecName "kube-api-access-zsxh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.667698 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2" (OuterVolumeSpecName: "kube-api-access-nwmt2") pod "e16d4483-cf0d-4977-bad2-ed10f6dde4c7" (UID: "e16d4483-cf0d-4977-bad2-ed10f6dde4c7"). InnerVolumeSpecName "kube-api-access-nwmt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.667719 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd" (OuterVolumeSpecName: "kube-api-access-qcszd") pod "7064ffc6-970d-4592-a979-ed7fd110cbc8" (UID: "7064ffc6-970d-4592-a979-ed7fd110cbc8"). InnerVolumeSpecName "kube-api-access-qcszd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.669573 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7064ffc6-970d-4592-a979-ed7fd110cbc8" (UID: "7064ffc6-970d-4592-a979-ed7fd110cbc8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.720449 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16d4483-cf0d-4977-bad2-ed10f6dde4c7" (UID: "e16d4483-cf0d-4977-bad2-ed10f6dde4c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.724679 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" (UID: "046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765387 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx9hj\" (UniqueName: \"kubernetes.io/projected/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-kube-api-access-xx9hj\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765423 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765438 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765449 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwmt2\" (UniqueName: \"kubernetes.io/projected/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-kube-api-access-nwmt2\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765461 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxh6\" (UniqueName: \"kubernetes.io/projected/72d11f1d-5410-4a20-9b17-2f04a831a398-kube-api-access-zsxh6\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765472 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765484 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcszd\" (UniqueName: \"kubernetes.io/projected/7064ffc6-970d-4592-a979-ed7fd110cbc8-kube-api-access-qcszd\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765495 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7064ffc6-970d-4592-a979-ed7fd110cbc8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765562 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.765573 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16d4483-cf0d-4977-bad2-ed10f6dde4c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.814292 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72d11f1d-5410-4a20-9b17-2f04a831a398" (UID: "72d11f1d-5410-4a20-9b17-2f04a831a398"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:10:51 crc kubenswrapper[4873]: I0121 00:10:51.867151 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d11f1d-5410-4a20-9b17-2f04a831a398-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.074950 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4273d27-1592-4728-8bd2-14888045fffb" path="/var/lib/kubelet/pods/f4273d27-1592-4728-8bd2-14888045fffb/volumes" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.277138 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv5c9" event={"ID":"e16d4483-cf0d-4977-bad2-ed10f6dde4c7","Type":"ContainerDied","Data":"a207b0e925650e1d58067a7c087d8514508c3a7589c193bfe53d82c25e12f7cd"} Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.277226 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv5c9" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.277231 4873 scope.go:117] "RemoveContainer" containerID="15b517abec7d6c15d7b1361bc1024e3dc7dcb3970a0d1bbcbdbe3936f5bfb21f" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.281437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" event={"ID":"7064ffc6-970d-4592-a979-ed7fd110cbc8","Type":"ContainerDied","Data":"34c426d007419ccd6b905584dd3f993f35029f0b6e0b6936b7f5b528c0c83468"} Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.281463 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bhsdr" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.286662 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkdsj" event={"ID":"bca22ae0-3824-47b7-8520-f7b7f0f657ab","Type":"ContainerDied","Data":"ab8b7e8da63bc309865644d22060fc071d13d151e898b6fc60f1f762d4c872f7"} Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.286849 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkdsj" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.292586 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q5pv" event={"ID":"046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc","Type":"ContainerDied","Data":"43aba0551423c93174b018b7d71b8b0d4fac6f6c9999d327b53292339de776bd"} Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.292771 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q5pv" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.298162 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqn92" event={"ID":"72d11f1d-5410-4a20-9b17-2f04a831a398","Type":"ContainerDied","Data":"feb209bcb6b979b6a0208eff184b24fe7daeb6efd441a44fa6315d72b662266b"} Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.298253 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqn92" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.316824 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xv5c9"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.321873 4873 scope.go:117] "RemoveContainer" containerID="a8cfa958bcaca15e1d333d3c4deaed3d01f412beb565639753319ebcd50ffd3c" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.329935 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xv5c9"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.344509 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkdsj"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.361469 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkdsj"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.370822 4873 scope.go:117] "RemoveContainer" containerID="c4f61160c84c0acb94d2278286de576b8176311f2ba698ebaa91d9e4666ddd71" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.372390 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6q5pv"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.384293 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6q5pv"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.394218 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bhsdr"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.401081 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bhsdr"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.403708 4873 scope.go:117] "RemoveContainer" containerID="c5c752c1bbcfdc7a680c325720bbfa5744cd18a32522ab71b8c1987385cd1311" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.406903 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqn92"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.412396 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqn92"] Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.416357 4873 scope.go:117] "RemoveContainer" containerID="3f971b98b2b9344536efcf4dd8b96d5c67d429d665abd13a749924b7fdd65690" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.426320 4873 scope.go:117] "RemoveContainer" containerID="f961316512fdfa3a4751438181ea9a35784a8291f772d953e026ba7f58418db2" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.437261 4873 scope.go:117] "RemoveContainer" containerID="9c56705048557fef862528366306c811c582d5f99eb95352a489a031bd2063a9" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.455476 4873 scope.go:117] "RemoveContainer" containerID="331d095de68cf243f6fa7f73da2583d799c6a7ea47cf5925ffdc5a0a4b81cf6b" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.469218 4873 scope.go:117] "RemoveContainer" containerID="cb519e2f5362ef9b9b73fa4f8dd3f9991d7ffee3430eef4489b1a9161a0316bf" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.488742 4873 scope.go:117] "RemoveContainer" containerID="4e2aedb62c213494362c2ba580d43cedd47d2e60441d87305f6ae1aefb9f96f5" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.509189 4873 scope.go:117] "RemoveContainer" 
containerID="c7304b2e21b6bc3558e09d266937c26f7826381acc9de63e6b6e68aab12a6288" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.521813 4873 scope.go:117] "RemoveContainer" containerID="aae83278df86999a8dc62b4f1bc21a81a90f43708d2485eef52c08aff16ae835" Jan 21 00:10:52 crc kubenswrapper[4873]: I0121 00:10:52.536942 4873 scope.go:117] "RemoveContainer" containerID="cff64af092d45f2995740af3715b791b7e9a6c27830eb333a043878790f1520a" Jan 21 00:10:54 crc kubenswrapper[4873]: I0121 00:10:54.071902 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" path="/var/lib/kubelet/pods/046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc/volumes" Jan 21 00:10:54 crc kubenswrapper[4873]: I0121 00:10:54.073819 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" path="/var/lib/kubelet/pods/7064ffc6-970d-4592-a979-ed7fd110cbc8/volumes" Jan 21 00:10:54 crc kubenswrapper[4873]: I0121 00:10:54.074911 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" path="/var/lib/kubelet/pods/72d11f1d-5410-4a20-9b17-2f04a831a398/volumes" Jan 21 00:10:54 crc kubenswrapper[4873]: I0121 00:10:54.077240 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" path="/var/lib/kubelet/pods/bca22ae0-3824-47b7-8520-f7b7f0f657ab/volumes" Jan 21 00:10:54 crc kubenswrapper[4873]: I0121 00:10:54.078764 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" path="/var/lib/kubelet/pods/e16d4483-cf0d-4977-bad2-ed10f6dde4c7/volumes" Jan 21 00:10:56 crc kubenswrapper[4873]: I0121 00:10:56.764536 4873 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:10:56 crc kubenswrapper[4873]: I0121 00:10:56.765032 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f" gracePeriod=5 Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.342736 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.343340 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.387704 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.387764 4873 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f" exitCode=137 Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.387814 4873 scope.go:117] "RemoveContainer" containerID="ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.387821 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.402795 4873 scope.go:117] "RemoveContainer" containerID="ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f" Jan 21 00:11:02 crc kubenswrapper[4873]: E0121 00:11:02.403255 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f\": container with ID starting with ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f not found: ID does not exist" containerID="ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.403296 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f"} err="failed to get container status \"ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f\": rpc error: code = NotFound desc = could not find container \"ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f\": container with ID starting with ced04af44a9bcbf7ea05ed813eca05773bf1bf7b4a6248ed2c11deb80a077c0f not found: ID does not exist" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495354 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495400 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495461 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495505 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495543 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495597 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495618 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495650 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.495684 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.496034 4873 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.496057 4873 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.496066 4873 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.496076 4873 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.503077 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:11:02 crc kubenswrapper[4873]: I0121 00:11:02.597194 4873 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.071053 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.071624 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.082111 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.082142 4873 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfb1b185-ff77-4d18-b1a7-10a1f64bdc79" Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.084609 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:11:04 crc kubenswrapper[4873]: I0121 00:11:04.084640 4873 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfb1b185-ff77-4d18-b1a7-10a1f64bdc79" Jan 21 00:11:07 crc kubenswrapper[4873]: I0121 00:11:07.912102 4873 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.449308 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.452610 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.452662 4873 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7a352bac7862e3e7fe3caee5a50517491baf694c47b8247987adb1319e2a739e" exitCode=137 Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.452695 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7a352bac7862e3e7fe3caee5a50517491baf694c47b8247987adb1319e2a739e"} Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.452725 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab5b1c7bfaaaafa0514fafdbabe3d16f80b3a0d36cdd5363bbf22e7b75c1ba9e"} Jan 21 00:11:13 crc kubenswrapper[4873]: I0121 00:11:13.452749 4873 scope.go:117] "RemoveContainer" containerID="38fcde6b9a93f4cbbc6a5260c20e26e1fb44f9cd99821bf01ce8532400ea9b0c" Jan 21 00:11:14 crc kubenswrapper[4873]: I0121 00:11:14.461362 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 00:11:21 crc kubenswrapper[4873]: I0121 00:11:21.978802 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:11:21 crc kubenswrapper[4873]: I0121 00:11:21.982729 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:11:22 crc kubenswrapper[4873]: I0121 00:11:22.427278 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:11:23 crc kubenswrapper[4873]: I0121 00:11:23.518079 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.249515 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hql4n"] Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.250861 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.250945 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251021 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251082 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251147 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251229 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251305 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251388 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251450 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251508 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251589 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251656 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="extract-utilities" Jan 21 00:11:30 crc 
kubenswrapper[4873]: E0121 00:11:30.251724 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251806 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.251876 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" containerName="installer" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.251940 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" containerName="installer" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252008 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252065 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252124 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252187 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252245 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252319 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252388 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252445 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252505 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252579 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252650 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252711 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252778 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252835 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" 
containerName="extract-utilities" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.252895 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.252955 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.253017 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerName="marketplace-operator" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253072 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerName="marketplace-operator" Jan 21 00:11:30 crc kubenswrapper[4873]: E0121 00:11:30.253133 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253188 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="extract-content" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253326 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="046e1d5c-e4bf-4bc4-97b7-12fc894a3bbc" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253435 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253505 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e2005b-07a3-41c9-b5a8-45faebbedbff" containerName="installer" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253581 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4273d27-1592-4728-8bd2-14888045fffb" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253649 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7064ffc6-970d-4592-a979-ed7fd110cbc8" containerName="marketplace-operator" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253708 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16d4483-cf0d-4977-bad2-ed10f6dde4c7" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253775 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca22ae0-3824-47b7-8520-f7b7f0f657ab" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.253842 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d11f1d-5410-4a20-9b17-2f04a831a398" containerName="registry-server" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.254300 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.258317 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.258797 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.258853 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.263113 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.263383 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.267293 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hql4n"] Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.377960 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.378020 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b24m2\" (UniqueName: \"kubernetes.io/projected/864ccb8e-f89f-49d9-985a-2d845b3690bf-kube-api-access-b24m2\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.378060 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.479671 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.479737 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b24m2\" (UniqueName: \"kubernetes.io/projected/864ccb8e-f89f-49d9-985a-2d845b3690bf-kube-api-access-b24m2\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.479780 4873 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.481100 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.495136 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/864ccb8e-f89f-49d9-985a-2d845b3690bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.499994 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b24m2\" (UniqueName: \"kubernetes.io/projected/864ccb8e-f89f-49d9-985a-2d845b3690bf-kube-api-access-b24m2\") pod \"marketplace-operator-79b997595-hql4n\" (UID: \"864ccb8e-f89f-49d9-985a-2d845b3690bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.579052 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:30 crc kubenswrapper[4873]: I0121 00:11:30.977143 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hql4n"] Jan 21 00:11:30 crc kubenswrapper[4873]: W0121 00:11:30.985049 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864ccb8e_f89f_49d9_985a_2d845b3690bf.slice/crio-f479fd46766f587d595e1427820372f9fc11d2dc968f6e9013a0c193ad4d97a8 WatchSource:0}: Error finding container f479fd46766f587d595e1427820372f9fc11d2dc968f6e9013a0c193ad4d97a8: Status 404 returned error can't find the container with id f479fd46766f587d595e1427820372f9fc11d2dc968f6e9013a0c193ad4d97a8 Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.376909 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28d2j"] Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.378333 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.380363 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.389481 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28d2j"] Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.521568 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7n6\" (UniqueName: \"kubernetes.io/projected/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-kube-api-access-8g7n6\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.521649 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-utilities\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.521835 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-catalog-content\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.557809 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" event={"ID":"864ccb8e-f89f-49d9-985a-2d845b3690bf","Type":"ContainerStarted","Data":"f3b39c1f5a73c36c8a437cc179b2cbeb75c3bf8639451f0fd59debc7e3ed7b8a"} Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.557857 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" event={"ID":"864ccb8e-f89f-49d9-985a-2d845b3690bf","Type":"ContainerStarted","Data":"f479fd46766f587d595e1427820372f9fc11d2dc968f6e9013a0c193ad4d97a8"} Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.558376 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.562945 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.576427 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6msps"] Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.578217 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hql4n" podStartSLOduration=1.578200042 podStartE2EDuration="1.578200042s" podCreationTimestamp="2026-01-21 00:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:31.576429634 +0000 UTC m=+323.816297280" watchObservedRunningTime="2026-01-21 00:11:31.578200042 +0000 UTC m=+323.818067698" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.581257 
4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.583414 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.607663 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6msps"] Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.623227 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-utilities\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.623671 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-catalog-content\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.623892 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g7n6\" (UniqueName: \"kubernetes.io/projected/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-kube-api-access-8g7n6\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.624268 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-utilities\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.624882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-catalog-content\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.647488 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g7n6\" (UniqueName: \"kubernetes.io/projected/3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a-kube-api-access-8g7n6\") pod \"redhat-operators-28d2j\" (UID: \"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a\") " pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.725448 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-utilities\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.725986 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-catalog-content\") pod \"certified-operators-6msps\" (UID: 
\"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.726028 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9sn5\" (UniqueName: \"kubernetes.io/projected/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-kube-api-access-f9sn5\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.733259 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.827070 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-utilities\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.827351 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-catalog-content\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.827398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9sn5\" (UniqueName: \"kubernetes.io/projected/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-kube-api-access-f9sn5\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.828133 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-catalog-content\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.828219 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-utilities\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.849411 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9sn5\" (UniqueName: \"kubernetes.io/projected/bbf3181e-73b7-4944-b7d2-e4970cb1b2b5-kube-api-access-f9sn5\") pod \"certified-operators-6msps\" (UID: \"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5\") " pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:31 crc kubenswrapper[4873]: I0121 00:11:31.909795 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.087337 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6msps"] Jan 21 00:11:32 crc kubenswrapper[4873]: W0121 00:11:32.092943 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbf3181e_73b7_4944_b7d2_e4970cb1b2b5.slice/crio-a8b924977bae6e73aafb5bb047e867c09dead1ac3a1c2a7bed8487833d16e745 WatchSource:0}: Error finding container a8b924977bae6e73aafb5bb047e867c09dead1ac3a1c2a7bed8487833d16e745: Status 404 returned error can't find the container with id a8b924977bae6e73aafb5bb047e867c09dead1ac3a1c2a7bed8487833d16e745 Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.119155 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28d2j"] Jan 21 00:11:32 crc kubenswrapper[4873]: W0121 00:11:32.124690 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c362d6b_3be3_4cc3_a71f_b9eb64e8e93a.slice/crio-7803e21025b1a9425e4daf7652de536a3b044fb2fde753540bf68be336b62a9d WatchSource:0}: Error finding container 7803e21025b1a9425e4daf7652de536a3b044fb2fde753540bf68be336b62a9d: Status 404 returned error can't find the container with id 7803e21025b1a9425e4daf7652de536a3b044fb2fde753540bf68be336b62a9d Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.563729 4873 generic.go:334] "Generic (PLEG): container finished" podID="3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a" containerID="3e5beec8711a3948668dc1d832bdb856799665ab1f12cd73e81dd129a312be90" exitCode=0 Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.563777 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28d2j" event={"ID":"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a","Type":"ContainerDied","Data":"3e5beec8711a3948668dc1d832bdb856799665ab1f12cd73e81dd129a312be90"} Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.564331 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28d2j" event={"ID":"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a","Type":"ContainerStarted","Data":"7803e21025b1a9425e4daf7652de536a3b044fb2fde753540bf68be336b62a9d"} Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.565667 4873 generic.go:334] "Generic (PLEG): container finished" podID="bbf3181e-73b7-4944-b7d2-e4970cb1b2b5" containerID="a31dce97aea363f4b361e45489a62717da689c15dcc65c0c99af01fe7e251127" exitCode=0 Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.565744 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6msps" event={"ID":"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5","Type":"ContainerDied","Data":"a31dce97aea363f4b361e45489a62717da689c15dcc65c0c99af01fe7e251127"} Jan 21 00:11:32 crc kubenswrapper[4873]: I0121 00:11:32.565797 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6msps" event={"ID":"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5","Type":"ContainerStarted","Data":"a8b924977bae6e73aafb5bb047e867c09dead1ac3a1c2a7bed8487833d16e745"} Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.574096 4873 generic.go:334] "Generic (PLEG): container finished" podID="bbf3181e-73b7-4944-b7d2-e4970cb1b2b5" containerID="174474018ec901869b4d4115ccd866651c525e7c103b129f73643aa15c41f4bc" exitCode=0 
Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.574191 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6msps" event={"ID":"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5","Type":"ContainerDied","Data":"174474018ec901869b4d4115ccd866651c525e7c103b129f73643aa15c41f4bc"} Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.774242 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8dtqn"] Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.775332 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.780272 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.791012 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8dtqn"] Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.852586 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7lm\" (UniqueName: \"kubernetes.io/projected/3907c072-f5a0-44fa-9d7c-4a329a37863e-kube-api-access-pz7lm\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.852667 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-utilities\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.852708 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-catalog-content\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.953263 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz7lm\" (UniqueName: \"kubernetes.io/projected/3907c072-f5a0-44fa-9d7c-4a329a37863e-kube-api-access-pz7lm\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.953356 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-utilities\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.953396 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-catalog-content\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.953896 
4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-utilities\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.954034 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3907c072-f5a0-44fa-9d7c-4a329a37863e-catalog-content\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.978192 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz7lm\" (UniqueName: \"kubernetes.io/projected/3907c072-f5a0-44fa-9d7c-4a329a37863e-kube-api-access-pz7lm\") pod \"community-operators-8dtqn\" (UID: \"3907c072-f5a0-44fa-9d7c-4a329a37863e\") " pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.984330 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.986453 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.989842 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 00:11:33 crc kubenswrapper[4873]: I0121 00:11:33.998992 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.054330 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.054389 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.054421 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskk4\" (UniqueName: \"kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.096040 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.155688 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.156223 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.156252 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lskk4\" (UniqueName: \"kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.156302 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.156628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.179532 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskk4\" (UniqueName: \"kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4\") pod \"redhat-marketplace-l4p44\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.328471 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.567331 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8dtqn"] Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.585309 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6msps" event={"ID":"bbf3181e-73b7-4944-b7d2-e4970cb1b2b5","Type":"ContainerStarted","Data":"f5353d30790815d9898765536aa5e7e1eed48dd48404abc4cd197728ffbf7a8f"} Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.588421 4873 generic.go:334] "Generic (PLEG): container finished" podID="3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a" containerID="5b4fd262520e900bb21b4406c7f718a3d5282b914c21e5aa5104123691b0130f" exitCode=0 Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.588469 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28d2j" event={"ID":"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a","Type":"ContainerDied","Data":"5b4fd262520e900bb21b4406c7f718a3d5282b914c21e5aa5104123691b0130f"} Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.618342 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6msps" podStartSLOduration=2.087873151 podStartE2EDuration="3.618323272s" podCreationTimestamp="2026-01-21 00:11:31 +0000 UTC" firstStartedPulling="2026-01-21 00:11:32.566842037 +0000 UTC m=+324.806709683" lastFinishedPulling="2026-01-21 00:11:34.097292158 +0000 UTC m=+326.337159804" observedRunningTime="2026-01-21 00:11:34.614409204 +0000 UTC m=+326.854276850" watchObservedRunningTime="2026-01-21 00:11:34.618323272 +0000 UTC m=+326.858190928" Jan 21 00:11:34 crc kubenswrapper[4873]: I0121 00:11:34.696902 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:11:34 crc kubenswrapper[4873]: W0121 00:11:34.704915 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1486aca_e3e4_4bd3_9048_89fd1fd0aef6.slice/crio-68440c6af5d516fe7973fa010e49a86f55e68a5ef1fcd7d31e9e052815f7a9fd WatchSource:0}: Error finding container 68440c6af5d516fe7973fa010e49a86f55e68a5ef1fcd7d31e9e052815f7a9fd: Status 404 returned error can't find the container with id 68440c6af5d516fe7973fa010e49a86f55e68a5ef1fcd7d31e9e052815f7a9fd Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.595601 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28d2j" event={"ID":"3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a","Type":"ContainerStarted","Data":"2300b7058faba58cddd6c1674b3ec042f4a6842569ac4391247c569c13f34988"} Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.597859 4873 generic.go:334] "Generic (PLEG): container finished" podID="3907c072-f5a0-44fa-9d7c-4a329a37863e" containerID="dc78c23e39b4b7068ff25e8ae8c462a436737d0dc70366f068a30f74eebd6f02" exitCode=0 Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.597931 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8dtqn" event={"ID":"3907c072-f5a0-44fa-9d7c-4a329a37863e","Type":"ContainerDied","Data":"dc78c23e39b4b7068ff25e8ae8c462a436737d0dc70366f068a30f74eebd6f02"} Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.597960 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8dtqn" event={"ID":"3907c072-f5a0-44fa-9d7c-4a329a37863e","Type":"ContainerStarted","Data":"45521e37a6c12fe43fd766ea6cf59e7942be5b796e56676b78172a64262aa1af"} Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.600654 4873 generic.go:334] "Generic (PLEG): container finished" podID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerID="04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9" exitCode=0 Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.600734 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerDied","Data":"04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9"} Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.600763 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerStarted","Data":"68440c6af5d516fe7973fa010e49a86f55e68a5ef1fcd7d31e9e052815f7a9fd"} Jan 21 00:11:35 crc kubenswrapper[4873]: I0121 00:11:35.624081 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28d2j" podStartSLOduration=2.141254022 podStartE2EDuration="4.624064387s" podCreationTimestamp="2026-01-21 00:11:31 +0000 UTC" firstStartedPulling="2026-01-21 00:11:32.565270133 +0000 UTC m=+324.805137779" lastFinishedPulling="2026-01-21 00:11:35.048080478 +0000 UTC m=+327.287948144" observedRunningTime="2026-01-21 00:11:35.618126264 +0000 UTC m=+327.857993950" watchObservedRunningTime="2026-01-21 00:11:35.624064387 +0000 UTC m=+327.863932033" Jan 21 00:11:36 crc kubenswrapper[4873]: I0121 00:11:36.605811 4873 generic.go:334] "Generic (PLEG): container finished" podID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerID="861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d" exitCode=0 Jan 21 00:11:36 crc kubenswrapper[4873]: I0121 00:11:36.605869 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerDied","Data":"861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d"} Jan 21 00:11:36 crc kubenswrapper[4873]: I0121 00:11:36.608998 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8dtqn" event={"ID":"3907c072-f5a0-44fa-9d7c-4a329a37863e","Type":"ContainerStarted","Data":"3be7b235046f41ee4791e43d8652a5a1fbe63019aad9cf39585f5c1343c94569"} Jan 21 00:11:37 crc kubenswrapper[4873]: I0121 00:11:37.618315 4873 generic.go:334] "Generic (PLEG): container finished" podID="3907c072-f5a0-44fa-9d7c-4a329a37863e" containerID="3be7b235046f41ee4791e43d8652a5a1fbe63019aad9cf39585f5c1343c94569" exitCode=0 Jan 21 00:11:37 crc kubenswrapper[4873]: I0121 00:11:37.618426 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8dtqn" event={"ID":"3907c072-f5a0-44fa-9d7c-4a329a37863e","Type":"ContainerDied","Data":"3be7b235046f41ee4791e43d8652a5a1fbe63019aad9cf39585f5c1343c94569"} Jan 21 00:11:37 crc kubenswrapper[4873]: I0121 00:11:37.622151 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerStarted","Data":"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d"} Jan 21 00:11:37 crc 
kubenswrapper[4873]: I0121 00:11:37.670906 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l4p44" podStartSLOduration=3.169747502 podStartE2EDuration="4.670877915s" podCreationTimestamp="2026-01-21 00:11:33 +0000 UTC" firstStartedPulling="2026-01-21 00:11:35.602017319 +0000 UTC m=+327.841884975" lastFinishedPulling="2026-01-21 00:11:37.103147742 +0000 UTC m=+329.343015388" observedRunningTime="2026-01-21 00:11:37.66885168 +0000 UTC m=+329.908719336" watchObservedRunningTime="2026-01-21 00:11:37.670877915 +0000 UTC m=+329.910745611" Jan 21 00:11:38 crc kubenswrapper[4873]: I0121 00:11:38.627398 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8dtqn" event={"ID":"3907c072-f5a0-44fa-9d7c-4a329a37863e","Type":"ContainerStarted","Data":"d7f5b893990359ba965acb39dea735184343cd5b8338938b55a20366edecb1db"} Jan 21 00:11:38 crc kubenswrapper[4873]: I0121 00:11:38.649162 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8dtqn" podStartSLOduration=3.171064296 podStartE2EDuration="5.649146403s" podCreationTimestamp="2026-01-21 00:11:33 +0000 UTC" firstStartedPulling="2026-01-21 00:11:35.599970663 +0000 UTC m=+327.839838319" lastFinishedPulling="2026-01-21 00:11:38.07805277 +0000 UTC m=+330.317920426" observedRunningTime="2026-01-21 00:11:38.645453174 +0000 UTC m=+330.885320830" watchObservedRunningTime="2026-01-21 00:11:38.649146403 +0000 UTC m=+330.889014049" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.734104 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.734488 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.788751 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.926792 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.926890 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:41 crc kubenswrapper[4873]: I0121 00:11:41.973968 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:42 crc kubenswrapper[4873]: I0121 00:11:42.690662 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28d2j" Jan 21 00:11:42 crc kubenswrapper[4873]: I0121 00:11:42.705492 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6msps" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.096496 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.096560 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.155619 4873 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.328709 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.328792 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.373852 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.697735 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8dtqn" Jan 21 00:11:44 crc kubenswrapper[4873]: I0121 00:11:44.723904 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:12:01 crc kubenswrapper[4873]: I0121 00:12:01.630248 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:12:01 crc kubenswrapper[4873]: I0121 00:12:01.630926 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:12:03 crc kubenswrapper[4873]: I0121 00:12:03.552304 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94tff"] Jan 21 00:12:03 crc kubenswrapper[4873]: I0121 00:12:03.553871 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:03 crc kubenswrapper[4873]: I0121 00:12:03.567449 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94tff"] Jan 21 00:12:03 crc kubenswrapper[4873]: I0121 00:12:03.736500 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80ca990b-a222-40fc-bb34-a2a5daae0bbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.736919 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80ca990b-a222-40fc-bb34-a2a5daae0bbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.736961 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-bound-sa-token\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.736981 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-tls\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.736999 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-certificates\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.737017 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68psl\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-kube-api-access-68psl\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.737042 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-trusted-ca\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.737075 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.772443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838196 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68psl\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-kube-api-access-68psl\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838237 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-trusted-ca\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838289 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80ca990b-a222-40fc-bb34-a2a5daae0bbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838354 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80ca990b-a222-40fc-bb34-a2a5daae0bbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838409 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-bound-sa-token\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838434 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-tls\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.838450 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-certificates\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.839770 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-certificates\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.840007 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80ca990b-a222-40fc-bb34-a2a5daae0bbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.840319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80ca990b-a222-40fc-bb34-a2a5daae0bbb-trusted-ca\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.846641 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-registry-tls\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.856443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68psl\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-kube-api-access-68psl\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.856882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ca990b-a222-40fc-bb34-a2a5daae0bbb-bound-sa-token\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.860274 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80ca990b-a222-40fc-bb34-a2a5daae0bbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94tff\" (UID: \"80ca990b-a222-40fc-bb34-a2a5daae0bbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:03.868404 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:04.370465 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94tff"] Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:04.805846 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" event={"ID":"80ca990b-a222-40fc-bb34-a2a5daae0bbb","Type":"ContainerStarted","Data":"509542c766f6ba8608624af0daec58e7f083fa57d25b4f6ea66245dfcd2248d5"} Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:04.805893 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" event={"ID":"80ca990b-a222-40fc-bb34-a2a5daae0bbb","Type":"ContainerStarted","Data":"4bc42e66624b2f08e6712708e44d2284f830535c5543b30abadedf7e7d76c06b"} Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:04.806839 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:04 crc kubenswrapper[4873]: I0121 00:12:04.826282 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" podStartSLOduration=1.826264681 podStartE2EDuration="1.826264681s" podCreationTimestamp="2026-01-21 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:12:04.824310128 +0000 UTC m=+357.064177774" watchObservedRunningTime="2026-01-21 00:12:04.826264681 +0000 UTC m=+357.066132327" Jan 21 00:12:23 crc kubenswrapper[4873]: I0121 00:12:23.874448 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-94tff" Jan 21 00:12:23 crc kubenswrapper[4873]: I0121 00:12:23.944308 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:12:31 crc kubenswrapper[4873]: I0121 00:12:31.630911 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:12:31 crc kubenswrapper[4873]: I0121 00:12:31.631883 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.002188 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" podUID="530a993a-eb48-4622-abec-7f3af78b3c40" containerName="registry" containerID="cri-o://e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9" gracePeriod=30 Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.417651 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.522792 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.522920 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523075 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523148 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523234 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523454 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523509 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v426\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.523626 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca\") pod \"530a993a-eb48-4622-abec-7f3af78b3c40\" (UID: \"530a993a-eb48-4622-abec-7f3af78b3c40\") " Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.524252 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.524303 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.524807 4873 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.524848 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/530a993a-eb48-4622-abec-7f3af78b3c40-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.530204 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.531422 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.533319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.534056 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426" (OuterVolumeSpecName: "kube-api-access-5v426") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "kube-api-access-5v426". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.540540 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.549254 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "530a993a-eb48-4622-abec-7f3af78b3c40" (UID: "530a993a-eb48-4622-abec-7f3af78b3c40"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.626522 4873 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/530a993a-eb48-4622-abec-7f3af78b3c40-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.626652 4873 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.626674 4873 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/530a993a-eb48-4622-abec-7f3af78b3c40-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.626695 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v426\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-kube-api-access-5v426\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:49 crc kubenswrapper[4873]: I0121 00:12:49.626713 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/530a993a-eb48-4622-abec-7f3af78b3c40-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.092107 4873 generic.go:334] "Generic (PLEG): container finished" podID="530a993a-eb48-4622-abec-7f3af78b3c40" containerID="e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9" exitCode=0 Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.092173 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" event={"ID":"530a993a-eb48-4622-abec-7f3af78b3c40","Type":"ContainerDied","Data":"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9"} Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.092245 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" event={"ID":"530a993a-eb48-4622-abec-7f3af78b3c40","Type":"ContainerDied","Data":"4161fba30e621dee3504820abcc51cf0674e7cf6ec3f2c21ccc0675bbfe28ace"} Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.092277 4873 scope.go:117] "RemoveContainer" containerID="e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9" Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.093688 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-flh6k" Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.118740 4873 scope.go:117] "RemoveContainer" containerID="e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9" Jan 21 00:12:50 crc kubenswrapper[4873]: E0121 00:12:50.119203 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9\": container with ID starting with e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9 not found: ID does not exist" containerID="e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9" Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.119251 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9"} err="failed to get container status \"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9\": rpc error: code = NotFound desc = could not find container \"e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9\": container with ID starting with e3c1d676042d3e43225ce37b9c415616c2194f06c4e91d83a81238329c4bcbe9 not found: ID does not exist" Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.128330 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:12:50 crc kubenswrapper[4873]: I0121 00:12:50.136190 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-flh6k"] Jan 21 00:12:52 crc kubenswrapper[4873]: I0121 00:12:52.076739 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530a993a-eb48-4622-abec-7f3af78b3c40" path="/var/lib/kubelet/pods/530a993a-eb48-4622-abec-7f3af78b3c40/volumes" Jan 21 00:13:01 crc kubenswrapper[4873]: I0121 00:13:01.631039 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:13:01 crc kubenswrapper[4873]: I0121 00:13:01.631685 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:13:01 crc kubenswrapper[4873]: I0121 00:13:01.631753 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:13:01 crc kubenswrapper[4873]: I0121 00:13:01.632531 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:13:01 crc kubenswrapper[4873]: I0121 00:13:01.632648 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" 
podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845" gracePeriod=600 Jan 21 00:13:02 crc kubenswrapper[4873]: I0121 00:13:02.174540 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845" exitCode=0 Jan 21 00:13:02 crc kubenswrapper[4873]: I0121 00:13:02.174713 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845"} Jan 21 00:13:02 crc kubenswrapper[4873]: I0121 00:13:02.175025 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb"} Jan 21 00:13:02 crc kubenswrapper[4873]: I0121 00:13:02.175059 4873 scope.go:117] "RemoveContainer" containerID="e2affc6ee3cc2ed589288582895617c2d7b683e71a1c44c9db844621b3488d46" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.194888 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt"] Jan 21 00:15:00 crc kubenswrapper[4873]: E0121 00:15:00.195819 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530a993a-eb48-4622-abec-7f3af78b3c40" containerName="registry" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.195841 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="530a993a-eb48-4622-abec-7f3af78b3c40" containerName="registry" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.196018 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="530a993a-eb48-4622-abec-7f3af78b3c40" containerName="registry" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.196617 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.199852 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.200412 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.205986 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt"] Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.370386 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ws7\" (UniqueName: \"kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.370504 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.370578 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.472016 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ws7\" (UniqueName: \"kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.472150 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.472212 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.474460 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume\") pod 
\"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.486795 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.495674 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ws7\" (UniqueName: \"kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7\") pod \"collect-profiles-29482575-d8wvt\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.524453 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.708632 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt"] Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.956173 4873 generic.go:334] "Generic (PLEG): container finished" podID="466dd605-5713-4b28-8bf5-8d1198cc6689" containerID="83ee6a225fc4e5c7808b8a68c9b9758fe7364817f7985a310f31fd7859f3edfe" exitCode=0 Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.956216 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" event={"ID":"466dd605-5713-4b28-8bf5-8d1198cc6689","Type":"ContainerDied","Data":"83ee6a225fc4e5c7808b8a68c9b9758fe7364817f7985a310f31fd7859f3edfe"} Jan 21 00:15:00 crc kubenswrapper[4873]: I0121 00:15:00.956240 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" event={"ID":"466dd605-5713-4b28-8bf5-8d1198cc6689","Type":"ContainerStarted","Data":"f4b583ffa522ed0bc32e9485d97c4c8538639a46c0754d756a113c56c0daeca6"} Jan 21 00:15:01 crc kubenswrapper[4873]: I0121 00:15:01.631092 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:15:01 crc kubenswrapper[4873]: I0121 00:15:01.631199 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.301220 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.398832 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7ws7\" (UniqueName: \"kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7\") pod \"466dd605-5713-4b28-8bf5-8d1198cc6689\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.398924 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume\") pod \"466dd605-5713-4b28-8bf5-8d1198cc6689\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.399007 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume\") pod \"466dd605-5713-4b28-8bf5-8d1198cc6689\" (UID: \"466dd605-5713-4b28-8bf5-8d1198cc6689\") " Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.400025 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume" (OuterVolumeSpecName: "config-volume") pod "466dd605-5713-4b28-8bf5-8d1198cc6689" (UID: "466dd605-5713-4b28-8bf5-8d1198cc6689"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.404518 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7" (OuterVolumeSpecName: "kube-api-access-p7ws7") pod "466dd605-5713-4b28-8bf5-8d1198cc6689" (UID: "466dd605-5713-4b28-8bf5-8d1198cc6689"). InnerVolumeSpecName "kube-api-access-p7ws7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.405383 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "466dd605-5713-4b28-8bf5-8d1198cc6689" (UID: "466dd605-5713-4b28-8bf5-8d1198cc6689"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.500379 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466dd605-5713-4b28-8bf5-8d1198cc6689-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.500436 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7ws7\" (UniqueName: \"kubernetes.io/projected/466dd605-5713-4b28-8bf5-8d1198cc6689-kube-api-access-p7ws7\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.500457 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/466dd605-5713-4b28-8bf5-8d1198cc6689-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.970065 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" event={"ID":"466dd605-5713-4b28-8bf5-8d1198cc6689","Type":"ContainerDied","Data":"f4b583ffa522ed0bc32e9485d97c4c8538639a46c0754d756a113c56c0daeca6"} Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.970111 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b583ffa522ed0bc32e9485d97c4c8538639a46c0754d756a113c56c0daeca6" Jan 21 00:15:02 crc kubenswrapper[4873]: I0121 00:15:02.970142 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt" Jan 21 00:15:31 crc kubenswrapper[4873]: I0121 00:15:31.630916 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:15:31 crc kubenswrapper[4873]: I0121 00:15:31.631542 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692117 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbp72"] Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692672 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-controller" containerID="cri-o://8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692772 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="northd" containerID="cri-o://c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692834 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" 
containerName="kube-rbac-proxy-node" containerID="cri-o://d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692866 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-acl-logging" containerID="cri-o://38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692916 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="sbdb" containerID="cri-o://2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692919 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="nbdb" containerID="cri-o://d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.692942 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" gracePeriod=30 Jan 21 00:15:32 crc kubenswrapper[4873]: I0121 00:15:32.741214 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" containerID="cri-o://2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" gracePeriod=30 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.046725 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/3.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.049020 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovn-acl-logging/0.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.049568 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovn-controller/0.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.049964 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121198 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4nz64"] Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121533 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="northd" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121584 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="northd" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121611 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121625 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121641 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121654 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121667 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="sbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121679 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="sbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121695 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kubecfg-setup" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121708 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kubecfg-setup" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121721 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-acl-logging" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121733 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-acl-logging" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121753 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121766 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121783 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-node" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121796 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-node" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121812 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121823 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121841 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466dd605-5713-4b28-8bf5-8d1198cc6689" containerName="collect-profiles" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121854 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="466dd605-5713-4b28-8bf5-8d1198cc6689" containerName="collect-profiles" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121872 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121884 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.121900 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="nbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.121911 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="nbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122072 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="466dd605-5713-4b28-8bf5-8d1198cc6689" containerName="collect-profiles" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122087 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122106 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="nbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122124 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovn-acl-logging" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122142 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122155 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122167 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="northd" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122179 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122197 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122213 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122230 4873 
memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="sbdb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122246 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="kube-rbac-proxy-node" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.122412 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122431 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122665 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.122827 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.122841 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerName="ovnkube-controller" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.125964 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.164713 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovnkube-controller/3.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.167013 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovn-acl-logging/0.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168169 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbp72_12879027-cbf4-4393-a71e-2a42d8c9f0fe/ovn-controller/0.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168686 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168723 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168770 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168791 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168804 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 
00:15:33.168814 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" exitCode=0 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168822 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" exitCode=143 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168859 4873 generic.go:334] "Generic (PLEG): container finished" podID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" exitCode=143 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168914 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168946 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168961 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168974 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.168990 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169004 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169018 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169030 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169039 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169047 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169055 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169062 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169069 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169077 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169084 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169094 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169106 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169115 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169123 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169129 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169136 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169143 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169150 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169158 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169166 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169173 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169183 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169194 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169202 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169210 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169217 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169224 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169231 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169238 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169245 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169253 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169260 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169270 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" event={"ID":"12879027-cbf4-4393-a71e-2a42d8c9f0fe","Type":"ContainerDied","Data":"4b7c0c75b706352fe70be863e2cf7e97d65cca8d72629e486a1110a5f310e62d"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169281 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169289 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169296 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169303 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169310 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169317 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169324 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169331 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169338 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169345 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169361 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.169591 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbp72" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.173261 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/2.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.173790 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/1.log" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.173837 4873 generic.go:334] "Generic (PLEG): container finished" podID="fc2b4503-97f2-44cb-a1ad-e558df352294" containerID="c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9" exitCode=2 Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.173865 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerDied","Data":"c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.173886 4873 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6"} Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.174311 4873 scope.go:117] "RemoveContainer" containerID="c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.174539 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nfrvx_openshift-multus(fc2b4503-97f2-44cb-a1ad-e558df352294)\"" pod="openshift-multus/multus-nfrvx" podUID="fc2b4503-97f2-44cb-a1ad-e558df352294" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.184534 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.203141 4873 scope.go:117] "RemoveContainer" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210230 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210287 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210315 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210336 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210359 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210385 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210414 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210444 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210470 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210484 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210503 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210517 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210672 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210713 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210743 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rt4d\" (UniqueName: \"kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210765 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210705 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210783 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210800 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210805 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210820 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210837 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin\") pod \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\" (UID: \"12879027-cbf4-4393-a71e-2a42d8c9f0fe\") " Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210952 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-var-lib-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210977 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-netd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.210997 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211042 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-kubelet\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-systemd-units\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211076 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-bin\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211111 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-systemd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211129 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9ch\" (UniqueName: \"kubernetes.io/projected/0729dcbe-65cd-4b4c-860c-06b917e45674-kube-api-access-qh9ch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211144 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211155 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-slash\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211193 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash" (OuterVolumeSpecName: "host-slash") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211209 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-netns\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211234 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211276 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-node-log\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211296 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0729dcbe-65cd-4b4c-860c-06b917e45674-ovn-node-metrics-cert\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211319 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-etc-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211361 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-log-socket\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-script-lib\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211405 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211455 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-config\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211489 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-ovn\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211534 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-env-overrides\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211599 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211614 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211647 4873 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211213 4873 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211226 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211237 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211271 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket" (OuterVolumeSpecName: "log-socket") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211285 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211313 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log" (OuterVolumeSpecName: "node-log") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211803 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211837 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211862 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211912 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211935 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.211981 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.212008 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.218163 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.218403 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d" (OuterVolumeSpecName: "kube-api-access-8rt4d") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "kube-api-access-8rt4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.221673 4873 scope.go:117] "RemoveContainer" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.227329 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "12879027-cbf4-4393-a71e-2a42d8c9f0fe" (UID: "12879027-cbf4-4393-a71e-2a42d8c9f0fe"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.237478 4873 scope.go:117] "RemoveContainer" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.250212 4873 scope.go:117] "RemoveContainer" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.263496 4873 scope.go:117] "RemoveContainer" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.278079 4873 scope.go:117] "RemoveContainer" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.291639 4873 scope.go:117] "RemoveContainer" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.305327 4873 scope.go:117] "RemoveContainer" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312143 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-env-overrides\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312181 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-var-lib-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312237 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-netd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312277 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312304 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-kubelet\") 
pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312338 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-systemd-units\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312359 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-bin\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312380 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-systemd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312403 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9ch\" (UniqueName: \"kubernetes.io/projected/0729dcbe-65cd-4b4c-860c-06b917e45674-kube-api-access-qh9ch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312431 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-slash\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312472 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-netns\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312499 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312519 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-node-log\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312542 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0729dcbe-65cd-4b4c-860c-06b917e45674-ovn-node-metrics-cert\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312593 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-etc-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312616 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-log-socket\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-script-lib\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312666 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312691 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-config\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312724 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-ovn\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312817 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-env-overrides\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312824 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-node-log\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312885 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: 
I0121 00:15:33.312887 4873 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-ovn\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-bin\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312938 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-netns\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312990 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-kubelet\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.312974 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-run-systemd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313025 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-log-socket\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313084 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-var-lib-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313195 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313463 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-systemd-units\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313597 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-slash\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-host-cni-netd\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313689 4873 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313704 4873 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313746 4873 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313762 4873 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313778 4873 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313794 4873 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313832 4873 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313012 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0729dcbe-65cd-4b4c-860c-06b917e45674-etc-openvswitch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc 
kubenswrapper[4873]: I0121 00:15:33.313846 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-config\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313846 4873 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313909 4873 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313926 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313941 4873 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313954 4873 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313969 4873 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313985 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rt4d\" (UniqueName: \"kubernetes.io/projected/12879027-cbf4-4393-a71e-2a42d8c9f0fe-kube-api-access-8rt4d\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.313997 4873 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12879027-cbf4-4393-a71e-2a42d8c9f0fe-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.314009 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12879027-cbf4-4393-a71e-2a42d8c9f0fe-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.314324 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0729dcbe-65cd-4b4c-860c-06b917e45674-ovnkube-script-lib\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.315928 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0729dcbe-65cd-4b4c-860c-06b917e45674-ovn-node-metrics-cert\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 
00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.320962 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.321482 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.321524 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} err="failed to get container status \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.321714 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.322120 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": container with ID starting with bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3 not found: ID does not exist" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.322157 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} err="failed to get container status \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": rpc error: code = NotFound desc = could not find container \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": container with ID starting with bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.322176 4873 scope.go:117] "RemoveContainer" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.322479 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": container with ID starting with 2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82 not found: ID does not exist" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.322510 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} err="failed to get container status \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": rpc error: code = NotFound desc = could not find container 
\"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": container with ID starting with 2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.322535 4873 scope.go:117] "RemoveContainer" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.322980 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": container with ID starting with d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb not found: ID does not exist" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323014 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} err="failed to get container status \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": rpc error: code = NotFound desc = could not find container \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": container with ID starting with d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323035 4873 scope.go:117] "RemoveContainer" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.323286 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": container with ID starting with c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60 not found: ID does not exist" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323313 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} err="failed to get container status \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": rpc error: code = NotFound desc = could not find container \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": container with ID starting with c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323330 4873 scope.go:117] "RemoveContainer" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.323729 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": container with ID starting with 920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def not found: ID does not exist" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323760 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} 
err="failed to get container status \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": rpc error: code = NotFound desc = could not find container \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": container with ID starting with 920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.323777 4873 scope.go:117] "RemoveContainer" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.324078 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": container with ID starting with d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278 not found: ID does not exist" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324105 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} err="failed to get container status \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": rpc error: code = NotFound desc = could not find container \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": container with ID starting with d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324123 4873 scope.go:117] "RemoveContainer" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.324364 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": container with ID starting with 38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31 not found: ID does not exist" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324393 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} err="failed to get container status \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": rpc error: code = NotFound desc = could not find container \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": container with ID starting with 38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324409 4873 scope.go:117] "RemoveContainer" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.324731 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": container with ID starting with 8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05 not found: ID does not exist" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324759 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} err="failed to get container status \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": rpc error: code = NotFound desc = could not find container \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": container with ID starting with 8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.324774 4873 scope.go:117] "RemoveContainer" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: E0121 00:15:33.325250 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": container with ID starting with 65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523 not found: ID does not exist" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325277 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} err="failed to get container status \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": rpc error: code = NotFound desc = could not find container \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": container with ID starting with 65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325293 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325625 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} err="failed to get container status \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325654 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325945 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} err="failed to get container status \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": rpc error: code = NotFound desc = could not find container \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": container with ID starting with bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.325991 4873 scope.go:117] "RemoveContainer" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.326285 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} err="failed to get container status \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": rpc error: code = NotFound desc = could not find container \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": container with ID starting with 2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.326315 4873 scope.go:117] "RemoveContainer" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.326810 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} err="failed to get container status \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": rpc error: code = NotFound desc = could not find container \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": container with ID starting with d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.326870 4873 scope.go:117] "RemoveContainer" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.327399 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} err="failed to get container status \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": rpc error: code = NotFound desc = could not find container \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": container with ID starting with c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.327429 4873 scope.go:117] "RemoveContainer" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.327755 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} err="failed to get container status \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": rpc error: code = NotFound desc = could not find container \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": container with ID starting with 920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.327780 4873 scope.go:117] "RemoveContainer" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328107 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} err="failed to get container status \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": rpc error: code = NotFound desc = could not find container \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": container with ID starting with 
d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328153 4873 scope.go:117] "RemoveContainer" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328435 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} err="failed to get container status \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": rpc error: code = NotFound desc = could not find container \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": container with ID starting with 38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328457 4873 scope.go:117] "RemoveContainer" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328757 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} err="failed to get container status \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": rpc error: code = NotFound desc = could not find container \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": container with ID starting with 8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.328794 4873 scope.go:117] "RemoveContainer" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329048 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} err="failed to get container status \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": rpc error: code = NotFound desc = could not find container \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": container with ID starting with 65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329071 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329288 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} err="failed to get container status \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329305 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329509 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} err="failed to get container status \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": rpc error: code = NotFound desc = could not find container \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": container with ID starting with bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329525 4873 scope.go:117] "RemoveContainer" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329741 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} err="failed to get container status \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": rpc error: code = NotFound desc = could not find container \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": container with ID starting with 2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329757 4873 scope.go:117] "RemoveContainer" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329957 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} err="failed to get container status \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": rpc error: code = NotFound desc = could not find container \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": container with ID starting with d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.329976 4873 scope.go:117] "RemoveContainer" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330287 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} err="failed to get container status \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": rpc error: code = NotFound desc = could not find container \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": container with ID starting with c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330304 4873 scope.go:117] "RemoveContainer" containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330605 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} err="failed to get container status \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": rpc error: code = NotFound desc = could not find container \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": container with ID starting with 920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def not found: ID does not exist" Jan 
21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330623 4873 scope.go:117] "RemoveContainer" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330885 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} err="failed to get container status \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": rpc error: code = NotFound desc = could not find container \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": container with ID starting with d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.330902 4873 scope.go:117] "RemoveContainer" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.331368 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} err="failed to get container status \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": rpc error: code = NotFound desc = could not find container \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": container with ID starting with 38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.331390 4873 scope.go:117] "RemoveContainer" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.331717 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} err="failed to get container status \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": rpc error: code = NotFound desc = could not find container \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": container with ID starting with 8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.331733 4873 scope.go:117] "RemoveContainer" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332073 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} err="failed to get container status \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": rpc error: code = NotFound desc = could not find container \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": container with ID starting with 65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332091 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332391 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} err="failed to get container status 
\"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332445 4873 scope.go:117] "RemoveContainer" containerID="bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332754 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3"} err="failed to get container status \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": rpc error: code = NotFound desc = could not find container \"bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3\": container with ID starting with bcccaf0be095a74ceffeff44b69e920d9a6aa0d9dd6bf53b6e4df54d5f1203f3 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.332771 4873 scope.go:117] "RemoveContainer" containerID="2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.333066 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82"} err="failed to get container status \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": rpc error: code = NotFound desc = could not find container \"2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82\": container with ID starting with 2da44d6607014089cda9397ad798b3c5021ff6b3bbc417e1b55b2c922894ba82 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.333085 4873 scope.go:117] "RemoveContainer" containerID="d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334156 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb"} err="failed to get container status \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": rpc error: code = NotFound desc = could not find container \"d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb\": container with ID starting with d14312f921bba4fcc90b8125df9accf8ec8787d97275b818555b02ea269c26cb not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334174 4873 scope.go:117] "RemoveContainer" containerID="c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334404 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60"} err="failed to get container status \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": rpc error: code = NotFound desc = could not find container \"c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60\": container with ID starting with c87f03f6ddf56808e1800922bda6510ef1e911d885340e31fb7b9772d8ef7e60 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334421 4873 scope.go:117] "RemoveContainer" 
containerID="920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334765 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def"} err="failed to get container status \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": rpc error: code = NotFound desc = could not find container \"920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def\": container with ID starting with 920cb441ef4b6b897145e4684874f84a46dc8ca477f467a7dca87b27a29a8def not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334783 4873 scope.go:117] "RemoveContainer" containerID="d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334947 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278"} err="failed to get container status \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": rpc error: code = NotFound desc = could not find container \"d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278\": container with ID starting with d8ae50c5992b8d3e3c4dda114a82b4dec24ca3cbdf727444bedad5ac6c76a278 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.334963 4873 scope.go:117] "RemoveContainer" containerID="38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335145 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31"} err="failed to get container status \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": rpc error: code = NotFound desc = could not find container \"38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31\": container with ID starting with 38086176d16a33ae062d89c3ec200db6526eb1bd73ead83705e1e0077f026f31 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335163 4873 scope.go:117] "RemoveContainer" containerID="8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335491 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05"} err="failed to get container status \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": rpc error: code = NotFound desc = could not find container \"8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05\": container with ID starting with 8169bb4b2ede6ec822ba68f4eb34b3e2a2450f77b8c01d8f7a084bb3b6adda05 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335510 4873 scope.go:117] "RemoveContainer" containerID="65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335687 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523"} err="failed to get container status \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": rpc error: code = NotFound desc = could not find 
container \"65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523\": container with ID starting with 65ad0d1853be2ec7738428d23de00d1c59b2affbea88008a12854a397eff5523 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.335802 4873 scope.go:117] "RemoveContainer" containerID="2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.336064 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322"} err="failed to get container status \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": rpc error: code = NotFound desc = could not find container \"2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322\": container with ID starting with 2b11ac7502a3f9dc7cf01911128321ed621739551ad586b5870c886aed83f322 not found: ID does not exist" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.340239 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9ch\" (UniqueName: \"kubernetes.io/projected/0729dcbe-65cd-4b4c-860c-06b917e45674-kube-api-access-qh9ch\") pod \"ovnkube-node-4nz64\" (UID: \"0729dcbe-65cd-4b4c-860c-06b917e45674\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.442223 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.507982 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbp72"] Jan 21 00:15:33 crc kubenswrapper[4873]: I0121 00:15:33.511536 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbp72"] Jan 21 00:15:34 crc kubenswrapper[4873]: I0121 00:15:34.071631 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12879027-cbf4-4393-a71e-2a42d8c9f0fe" path="/var/lib/kubelet/pods/12879027-cbf4-4393-a71e-2a42d8c9f0fe/volumes" Jan 21 00:15:34 crc kubenswrapper[4873]: I0121 00:15:34.180935 4873 generic.go:334] "Generic (PLEG): container finished" podID="0729dcbe-65cd-4b4c-860c-06b917e45674" containerID="ba641f003cf43abf3faa4c97cafce00c84cf48d7bbbdfdf2263f12894ea72864" exitCode=0 Jan 21 00:15:34 crc kubenswrapper[4873]: I0121 00:15:34.180983 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerDied","Data":"ba641f003cf43abf3faa4c97cafce00c84cf48d7bbbdfdf2263f12894ea72864"} Jan 21 00:15:34 crc kubenswrapper[4873]: I0121 00:15:34.181009 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"d19dcbb3ca0011d0b45cdb869cfc61f3119e4ada2ddf74d7fb44d6fc7d11aa7c"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"9c0d44dff30645deb33626dab2020d20cf4d50d9633c61c8d4097d6b873b8f21"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191741 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" 
event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"30cccf14f82653f410174464c4693e69cadb121edbc6d67e058f320e9099ee81"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191754 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"9861fcb17140680add1611e8bfa1e84dd9d5eb80bb9d5d51f3bbb619efa81c10"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191764 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"4ebf89ce5a087425aaa0f36565e11d850223ebbe52a70d1f60e11cf06339b543"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191773 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"320878643d45246a3b43f3df61c95c9b442846c57279742aad22e5667f3cb742"} Jan 21 00:15:35 crc kubenswrapper[4873]: I0121 00:15:35.191783 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"8bffb6e326f414f18c3fd6235fd693900472104f121ed54f577a00bcb2cd66e1"} Jan 21 00:15:37 crc kubenswrapper[4873]: I0121 00:15:37.205638 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"86032498cca7a142add6f707b1f4fc1574fc340f08e1fbaebfb733a6aa67aa18"} Jan 21 00:15:40 crc kubenswrapper[4873]: I0121 00:15:40.229844 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" event={"ID":"0729dcbe-65cd-4b4c-860c-06b917e45674","Type":"ContainerStarted","Data":"98dddea3dddb5b92e19f04add58cb585924effc4734dca774efc23c03ace27f1"} Jan 21 00:15:40 crc kubenswrapper[4873]: I0121 00:15:40.230465 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:40 crc kubenswrapper[4873]: I0121 00:15:40.230482 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:40 crc kubenswrapper[4873]: I0121 00:15:40.271015 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" podStartSLOduration=7.270991041 podStartE2EDuration="7.270991041s" podCreationTimestamp="2026-01-21 00:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:15:40.263540217 +0000 UTC m=+572.503407913" watchObservedRunningTime="2026-01-21 00:15:40.270991041 +0000 UTC m=+572.510858727" Jan 21 00:15:41 crc kubenswrapper[4873]: I0121 00:15:41.236707 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:41 crc kubenswrapper[4873]: I0121 00:15:41.637197 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:41 crc kubenswrapper[4873]: I0121 00:15:41.638021 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:15:48 crc kubenswrapper[4873]: I0121 00:15:48.066948 4873 scope.go:117] "RemoveContainer" containerID="c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9" Jan 21 00:15:48 crc kubenswrapper[4873]: E0121 00:15:48.068057 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nfrvx_openshift-multus(fc2b4503-97f2-44cb-a1ad-e558df352294)\"" pod="openshift-multus/multus-nfrvx" podUID="fc2b4503-97f2-44cb-a1ad-e558df352294" Jan 21 00:15:59 crc kubenswrapper[4873]: I0121 00:15:59.063951 4873 scope.go:117] "RemoveContainer" containerID="c869f9e6f90c252b9e52ba1e1dd55199aa3802419bf58c37706840c300b511b9" Jan 21 00:15:59 crc kubenswrapper[4873]: I0121 00:15:59.363883 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/2.log" Jan 21 00:15:59 crc kubenswrapper[4873]: I0121 00:15:59.364955 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/1.log" Jan 21 00:15:59 crc kubenswrapper[4873]: I0121 00:15:59.365008 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nfrvx" event={"ID":"fc2b4503-97f2-44cb-a1ad-e558df352294","Type":"ContainerStarted","Data":"d9fda004ce6578cd539fb087eb4af5b0162691c1d94b3af191ef3023823120c2"} Jan 21 00:16:01 crc kubenswrapper[4873]: I0121 00:16:01.630370 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:16:01 crc kubenswrapper[4873]: I0121 00:16:01.630855 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:16:01 crc kubenswrapper[4873]: I0121 00:16:01.630967 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:16:01 crc kubenswrapper[4873]: I0121 00:16:01.631942 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:16:01 crc kubenswrapper[4873]: I0121 00:16:01.632042 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb" gracePeriod=600 Jan 21 00:16:02 crc kubenswrapper[4873]: I0121 00:16:02.384940 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" 
containerID="15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb" exitCode=0 Jan 21 00:16:02 crc kubenswrapper[4873]: I0121 00:16:02.384982 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb"} Jan 21 00:16:02 crc kubenswrapper[4873]: I0121 00:16:02.385014 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba"} Jan 21 00:16:02 crc kubenswrapper[4873]: I0121 00:16:02.385033 4873 scope.go:117] "RemoveContainer" containerID="ecbc66699ceff75e71816474db48ead8996d0fcac33b380626bf21ba56881845" Jan 21 00:16:03 crc kubenswrapper[4873]: I0121 00:16:03.470626 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4nz64" Jan 21 00:16:08 crc kubenswrapper[4873]: I0121 00:16:08.290691 4873 scope.go:117] "RemoveContainer" containerID="739eac4c9891293a44bb164198a62c0f220ef5efc9fda47a8a8619bfc0c1bce6" Jan 21 00:16:08 crc kubenswrapper[4873]: I0121 00:16:08.420433 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/2.log" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.502361 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.503212 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l4p44" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="registry-server" containerID="cri-o://5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d" gracePeriod=30 Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.829919 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.860970 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities\") pod \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.861048 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lskk4\" (UniqueName: \"kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4\") pod \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.861100 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content\") pod \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\" (UID: \"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6\") " Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.861969 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities" (OuterVolumeSpecName: "utilities") pod "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" (UID: "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.867423 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4" (OuterVolumeSpecName: "kube-api-access-lskk4") pod "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" (UID: "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6"). InnerVolumeSpecName "kube-api-access-lskk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.881908 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" (UID: "b1486aca-e3e4-4bd3-9048-89fd1fd0aef6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.961823 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lskk4\" (UniqueName: \"kubernetes.io/projected/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-kube-api-access-lskk4\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.961855 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:39 crc kubenswrapper[4873]: I0121 00:16:39.961866 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.612433 4873 generic.go:334] "Generic (PLEG): container finished" podID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerID="5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d" exitCode=0 Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.612504 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4p44" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.612495 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerDied","Data":"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d"} Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.612635 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4p44" event={"ID":"b1486aca-e3e4-4bd3-9048-89fd1fd0aef6","Type":"ContainerDied","Data":"68440c6af5d516fe7973fa010e49a86f55e68a5ef1fcd7d31e9e052815f7a9fd"} Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.612669 4873 scope.go:117] "RemoveContainer" containerID="5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.637583 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.641390 4873 scope.go:117] "RemoveContainer" containerID="861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.644258 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4p44"] Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.662381 4873 scope.go:117] "RemoveContainer" containerID="04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.697198 4873 scope.go:117] "RemoveContainer" containerID="5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d" Jan 21 00:16:40 crc kubenswrapper[4873]: E0121 00:16:40.698107 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d\": container with ID starting with 5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d not found: ID does not exist" containerID="5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.698194 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d"} err="failed to get container status \"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d\": rpc error: code = NotFound desc = could not find container \"5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d\": container with ID starting with 5279af53233e105f42dce879c5ffd3ece814b672705c833a85f7790991e2030d not found: ID does not exist" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.698235 4873 scope.go:117] "RemoveContainer" containerID="861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d" Jan 21 00:16:40 crc kubenswrapper[4873]: E0121 00:16:40.698770 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d\": container with ID starting with 861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d not found: ID does not exist" containerID="861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.698840 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d"} err="failed to get container status \"861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d\": rpc error: code = NotFound desc = could not find container \"861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d\": container with ID starting with 861829b9b907dff700dd542121a45871545a5af02a137e6acef5a4f14d54ea1d not found: ID does not exist" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.698882 4873 scope.go:117] "RemoveContainer" containerID="04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9" Jan 21 00:16:40 crc kubenswrapper[4873]: E0121 00:16:40.699791 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9\": container with ID starting with 04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9 not found: ID does not exist" containerID="04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9" Jan 21 00:16:40 crc kubenswrapper[4873]: I0121 00:16:40.699845 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9"} err="failed to get container status \"04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9\": rpc error: code = NotFound desc = could not find container \"04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9\": container with ID starting with 04e72da2dca9537aeea83a9d21e55ac07c255d54cebb5601193b2980a915adb9 not found: ID does not exist" Jan 21 00:16:42 crc kubenswrapper[4873]: I0121 00:16:42.072936 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" path="/var/lib/kubelet/pods/b1486aca-e3e4-4bd3-9048-89fd1fd0aef6/volumes" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.124003 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz"] Jan 21 00:16:43 crc kubenswrapper[4873]: E0121 00:16:43.124968 4873 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="extract-utilities" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.125079 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="extract-utilities" Jan 21 00:16:43 crc kubenswrapper[4873]: E0121 00:16:43.125171 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="registry-server" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.125249 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="registry-server" Jan 21 00:16:43 crc kubenswrapper[4873]: E0121 00:16:43.125391 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="extract-content" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.125488 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="extract-content" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.125729 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1486aca-e3e4-4bd3-9048-89fd1fd0aef6" containerName="registry-server" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.126937 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.129260 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.139211 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz"] Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.204179 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.204234 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tkq\" (UniqueName: \"kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.204300 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.305530 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.305610 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8tkq\" (UniqueName: \"kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.305678 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.306184 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.306493 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.327335 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8tkq\" (UniqueName: \"kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.447814 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:43 crc kubenswrapper[4873]: I0121 00:16:43.627567 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz"] Jan 21 00:16:43 crc kubenswrapper[4873]: W0121 00:16:43.645400 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cf327e3_3f0d_4bcc_8246_4b31c21d06be.slice/crio-fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492 WatchSource:0}: Error finding container fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492: Status 404 returned error can't find the container with id fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492 Jan 21 00:16:44 crc kubenswrapper[4873]: I0121 00:16:44.646484 4873 generic.go:334] "Generic (PLEG): container finished" podID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerID="d0254e02076148117760edf1bd45ce1b5f77b79c20ace4316457157585c2aa0a" exitCode=0 Jan 21 00:16:44 crc kubenswrapper[4873]: I0121 00:16:44.646596 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" event={"ID":"3cf327e3-3f0d-4bcc-8246-4b31c21d06be","Type":"ContainerDied","Data":"d0254e02076148117760edf1bd45ce1b5f77b79c20ace4316457157585c2aa0a"} Jan 21 00:16:44 crc kubenswrapper[4873]: I0121 00:16:44.647788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" event={"ID":"3cf327e3-3f0d-4bcc-8246-4b31c21d06be","Type":"ContainerStarted","Data":"fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492"} Jan 21 00:16:44 crc kubenswrapper[4873]: I0121 00:16:44.649153 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:16:46 crc kubenswrapper[4873]: I0121 00:16:46.661772 4873 generic.go:334] "Generic (PLEG): container finished" podID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerID="3057b729a82b8bd3be20434e1c5f72f32ff3af3411e840f7d1f72e94c723255b" exitCode=0 Jan 21 00:16:46 crc kubenswrapper[4873]: I0121 00:16:46.661895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" event={"ID":"3cf327e3-3f0d-4bcc-8246-4b31c21d06be","Type":"ContainerDied","Data":"3057b729a82b8bd3be20434e1c5f72f32ff3af3411e840f7d1f72e94c723255b"} Jan 21 00:16:47 crc kubenswrapper[4873]: I0121 00:16:47.673888 4873 generic.go:334] "Generic (PLEG): container finished" podID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerID="7053e1c44a1308df6348b92dbbd7ad8b716dfe4aa839e0d79ea9eeafed9e32fb" exitCode=0 Jan 21 00:16:47 crc kubenswrapper[4873]: I0121 00:16:47.673932 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" event={"ID":"3cf327e3-3f0d-4bcc-8246-4b31c21d06be","Type":"ContainerDied","Data":"7053e1c44a1308df6348b92dbbd7ad8b716dfe4aa839e0d79ea9eeafed9e32fb"} Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.897452 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.982254 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle\") pod \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.982348 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8tkq\" (UniqueName: \"kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq\") pod \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.982421 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util\") pod \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\" (UID: \"3cf327e3-3f0d-4bcc-8246-4b31c21d06be\") " Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.985928 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle" (OuterVolumeSpecName: "bundle") pod "3cf327e3-3f0d-4bcc-8246-4b31c21d06be" (UID: "3cf327e3-3f0d-4bcc-8246-4b31c21d06be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:48 crc kubenswrapper[4873]: I0121 00:16:48.991766 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq" (OuterVolumeSpecName: "kube-api-access-j8tkq") pod "3cf327e3-3f0d-4bcc-8246-4b31c21d06be" (UID: "3cf327e3-3f0d-4bcc-8246-4b31c21d06be"). InnerVolumeSpecName "kube-api-access-j8tkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.013649 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util" (OuterVolumeSpecName: "util") pod "3cf327e3-3f0d-4bcc-8246-4b31c21d06be" (UID: "3cf327e3-3f0d-4bcc-8246-4b31c21d06be"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.083615 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.083649 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.083660 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8tkq\" (UniqueName: \"kubernetes.io/projected/3cf327e3-3f0d-4bcc-8246-4b31c21d06be-kube-api-access-j8tkq\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.312055 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v"] Jan 21 00:16:49 crc kubenswrapper[4873]: E0121 00:16:49.312353 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="util" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.312371 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="util" Jan 21 00:16:49 crc kubenswrapper[4873]: E0121 00:16:49.312392 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="pull" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.312402 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="pull" Jan 21 00:16:49 crc kubenswrapper[4873]: E0121 00:16:49.312429 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="extract" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.312439 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="extract" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.312604 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf327e3-3f0d-4bcc-8246-4b31c21d06be" containerName="extract" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.314003 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.328333 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v"] Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.387653 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.387736 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkdmh\" (UniqueName: \"kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.387775 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.489724 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.490164 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.490428 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.490618 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkdmh\" (UniqueName: \"kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.491017 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.523217 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkdmh\" (UniqueName: \"kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.641721 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.692060 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" event={"ID":"3cf327e3-3f0d-4bcc-8246-4b31c21d06be","Type":"ContainerDied","Data":"fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492"} Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.692128 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2094b244f2a2a094824b185aafe7aad5e4c523539285a120496316c0426492" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.692237 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz" Jan 21 00:16:49 crc kubenswrapper[4873]: I0121 00:16:49.861109 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v"] Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.119447 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr"] Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.125220 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.150099 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr"] Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.198866 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.199113 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.199190 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7ml\" (UniqueName: \"kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.300339 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.300431 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7ml\" (UniqueName: \"kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.300590 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.304158 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.304157 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.338110 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7ml\" (UniqueName: \"kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.455526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.702643 4873 generic.go:334] "Generic (PLEG): container finished" podID="75ce7252-19af-4465-afa1-11908cc182b0" containerID="b8c5a87ee5c965d39bef36e030c2e7123e1bec16842c29cb681cce1868021921" exitCode=0 Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.703413 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerDied","Data":"b8c5a87ee5c965d39bef36e030c2e7123e1bec16842c29cb681cce1868021921"} Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.703449 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr"] Jan 21 00:16:50 crc kubenswrapper[4873]: I0121 00:16:50.703463 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerStarted","Data":"1f248a5281c485d4d938cc060a6ea9b6172426963c4da03268fce00a9fa8e64d"} Jan 21 00:16:50 crc kubenswrapper[4873]: W0121 00:16:50.710064 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b840c6_394e_4161_a325_b79427f6e4e7.slice/crio-dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2 WatchSource:0}: Error finding container dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2: Status 404 returned error can't find the container with id dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2 Jan 21 00:16:51 crc kubenswrapper[4873]: I0121 00:16:51.709918 4873 generic.go:334] "Generic (PLEG): container finished" podID="17b840c6-394e-4161-a325-b79427f6e4e7" containerID="651d30b2a77471edd86dd8cedb028079960d5df5e53083b55337b3f32eabebb5" exitCode=0 Jan 21 00:16:51 crc kubenswrapper[4873]: I0121 00:16:51.709999 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" 
event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerDied","Data":"651d30b2a77471edd86dd8cedb028079960d5df5e53083b55337b3f32eabebb5"} Jan 21 00:16:51 crc kubenswrapper[4873]: I0121 00:16:51.710063 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerStarted","Data":"dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2"} Jan 21 00:16:53 crc kubenswrapper[4873]: I0121 00:16:53.721912 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerStarted","Data":"7c39a49a40ea9439f51804720dab915a965aba64723f4e148554b372a85a049b"} Jan 21 00:16:53 crc kubenswrapper[4873]: I0121 00:16:53.723758 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerStarted","Data":"324e33ad41c9f69c202de7f8f040b347849453af037f75328a9010c9e69d487b"} Jan 21 00:16:54 crc kubenswrapper[4873]: I0121 00:16:54.730514 4873 generic.go:334] "Generic (PLEG): container finished" podID="17b840c6-394e-4161-a325-b79427f6e4e7" containerID="7c39a49a40ea9439f51804720dab915a965aba64723f4e148554b372a85a049b" exitCode=0 Jan 21 00:16:54 crc kubenswrapper[4873]: I0121 00:16:54.730583 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerDied","Data":"7c39a49a40ea9439f51804720dab915a965aba64723f4e148554b372a85a049b"} Jan 21 00:16:54 crc kubenswrapper[4873]: I0121 00:16:54.733800 4873 generic.go:334] "Generic (PLEG): container finished" podID="75ce7252-19af-4465-afa1-11908cc182b0" containerID="324e33ad41c9f69c202de7f8f040b347849453af037f75328a9010c9e69d487b" exitCode=0 Jan 21 00:16:54 crc kubenswrapper[4873]: I0121 00:16:54.733830 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerDied","Data":"324e33ad41c9f69c202de7f8f040b347849453af037f75328a9010c9e69d487b"} Jan 21 00:16:55 crc kubenswrapper[4873]: I0121 00:16:55.739326 4873 generic.go:334] "Generic (PLEG): container finished" podID="75ce7252-19af-4465-afa1-11908cc182b0" containerID="8716ee206f2c4eb6b3bc12f949d3adb7c5f4a9718195c5cd3f67e48ae26b3c29" exitCode=0 Jan 21 00:16:55 crc kubenswrapper[4873]: I0121 00:16:55.739568 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerDied","Data":"8716ee206f2c4eb6b3bc12f949d3adb7c5f4a9718195c5cd3f67e48ae26b3c29"} Jan 21 00:16:55 crc kubenswrapper[4873]: I0121 00:16:55.741410 4873 generic.go:334] "Generic (PLEG): container finished" podID="17b840c6-394e-4161-a325-b79427f6e4e7" containerID="8348e159bd03c1732c93c77702164af1fa3cb44af329d274938549f83e1ac48b" exitCode=0 Jan 21 00:16:55 crc kubenswrapper[4873]: I0121 00:16:55.741437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" 
event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerDied","Data":"8348e159bd03c1732c93c77702164af1fa3cb44af329d274938549f83e1ac48b"} Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.113469 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.175464 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util\") pod \"75ce7252-19af-4465-afa1-11908cc182b0\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.175514 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkdmh\" (UniqueName: \"kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh\") pod \"75ce7252-19af-4465-afa1-11908cc182b0\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.175607 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle\") pod \"75ce7252-19af-4465-afa1-11908cc182b0\" (UID: \"75ce7252-19af-4465-afa1-11908cc182b0\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.176463 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle" (OuterVolumeSpecName: "bundle") pod "75ce7252-19af-4465-afa1-11908cc182b0" (UID: "75ce7252-19af-4465-afa1-11908cc182b0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.181598 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh" (OuterVolumeSpecName: "kube-api-access-wkdmh") pod "75ce7252-19af-4465-afa1-11908cc182b0" (UID: "75ce7252-19af-4465-afa1-11908cc182b0"). InnerVolumeSpecName "kube-api-access-wkdmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.183658 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.199149 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util" (OuterVolumeSpecName: "util") pod "75ce7252-19af-4465-afa1-11908cc182b0" (UID: "75ce7252-19af-4465-afa1-11908cc182b0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276436 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle\") pod \"17b840c6-394e-4161-a325-b79427f6e4e7\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276497 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7ml\" (UniqueName: \"kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml\") pod \"17b840c6-394e-4161-a325-b79427f6e4e7\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276591 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util\") pod \"17b840c6-394e-4161-a325-b79427f6e4e7\" (UID: \"17b840c6-394e-4161-a325-b79427f6e4e7\") " Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276815 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276838 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkdmh\" (UniqueName: \"kubernetes.io/projected/75ce7252-19af-4465-afa1-11908cc182b0-kube-api-access-wkdmh\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.276850 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ce7252-19af-4465-afa1-11908cc182b0-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.277485 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle" (OuterVolumeSpecName: "bundle") pod "17b840c6-394e-4161-a325-b79427f6e4e7" (UID: "17b840c6-394e-4161-a325-b79427f6e4e7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.279748 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml" (OuterVolumeSpecName: "kube-api-access-pp7ml") pod "17b840c6-394e-4161-a325-b79427f6e4e7" (UID: "17b840c6-394e-4161-a325-b79427f6e4e7"). InnerVolumeSpecName "kube-api-access-pp7ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.286881 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util" (OuterVolumeSpecName: "util") pod "17b840c6-394e-4161-a325-b79427f6e4e7" (UID: "17b840c6-394e-4161-a325-b79427f6e4e7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.378474 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.378522 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17b840c6-394e-4161-a325-b79427f6e4e7-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.378540 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7ml\" (UniqueName: \"kubernetes.io/projected/17b840c6-394e-4161-a325-b79427f6e4e7-kube-api-access-pp7ml\") on node \"crc\" DevicePath \"\"" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.753835 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" event={"ID":"17b840c6-394e-4161-a325-b79427f6e4e7","Type":"ContainerDied","Data":"dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2"} Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.753870 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc91d122a1a74b3a1e655a7bda09f281d40b506277465dc463c2aaa0841debe2" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.753845 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.755829 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" event={"ID":"75ce7252-19af-4465-afa1-11908cc182b0","Type":"ContainerDied","Data":"1f248a5281c485d4d938cc060a6ea9b6172426963c4da03268fce00a9fa8e64d"} Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.755861 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f248a5281c485d4d938cc060a6ea9b6172426963c4da03268fce00a9fa8e64d" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.755883 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.935663 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f"] Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.935918 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="util" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.935934 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="util" Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.935945 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="util" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.935952 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="util" Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.935964 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="pull" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.935972 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="pull" Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.935984 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.935991 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.936006 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.936013 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: E0121 00:16:57.936025 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="pull" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.936032 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="pull" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.936135 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b840c6-394e-4161-a325-b79427f6e4e7" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.936155 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ce7252-19af-4465-afa1-11908cc182b0" containerName="extract" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.937058 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.940490 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.952810 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f"] Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.984752 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.984812 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:57 crc kubenswrapper[4873]: I0121 00:16:57.984832 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8hw\" (UniqueName: \"kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.086418 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.086509 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.086833 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8hw\" (UniqueName: \"kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.087079 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.087091 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.103929 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8hw\" (UniqueName: \"kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.250660 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.685241 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f"] Jan 21 00:16:58 crc kubenswrapper[4873]: W0121 00:16:58.693167 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cc90934_86a7_4617_930f_422e65f99caf.slice/crio-2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798 WatchSource:0}: Error finding container 2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798: Status 404 returned error can't find the container with id 2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798 Jan 21 00:16:58 crc kubenswrapper[4873]: I0121 00:16:58.763142 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" event={"ID":"1cc90934-86a7-4617-930f-422e65f99caf","Type":"ContainerStarted","Data":"2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798"} Jan 21 00:16:59 crc kubenswrapper[4873]: I0121 00:16:59.770211 4873 generic.go:334] "Generic (PLEG): container finished" podID="1cc90934-86a7-4617-930f-422e65f99caf" containerID="6ae56b66a7a1d08609a2dbd0a9a9ffbac08302c1d6f26d3b16e52daf770e49a9" exitCode=0 Jan 21 00:16:59 crc kubenswrapper[4873]: I0121 00:16:59.770287 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" event={"ID":"1cc90934-86a7-4617-930f-422e65f99caf","Type":"ContainerDied","Data":"6ae56b66a7a1d08609a2dbd0a9a9ffbac08302c1d6f26d3b16e52daf770e49a9"} Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.318973 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.319647 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.322380 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.322622 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zf2kt" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.323093 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.330218 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.415624 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8n6\" (UniqueName: \"kubernetes.io/projected/2e96a2df-caed-45ec-967c-ec7ac2705c55-kube-api-access-5q8n6\") pod \"obo-prometheus-operator-68bc856cb9-rtw4s\" (UID: \"2e96a2df-caed-45ec-967c-ec7ac2705c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.444854 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.445645 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.448660 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.448939 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-62cqs" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.461855 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.466390 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.467427 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.486719 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.521166 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8n6\" (UniqueName: \"kubernetes.io/projected/2e96a2df-caed-45ec-967c-ec7ac2705c55-kube-api-access-5q8n6\") pod \"obo-prometheus-operator-68bc856cb9-rtw4s\" (UID: \"2e96a2df-caed-45ec-967c-ec7ac2705c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.521219 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.521247 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.521264 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.521284 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.540409 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8n6\" (UniqueName: \"kubernetes.io/projected/2e96a2df-caed-45ec-967c-ec7ac2705c55-kube-api-access-5q8n6\") pod \"obo-prometheus-operator-68bc856cb9-rtw4s\" (UID: \"2e96a2df-caed-45ec-967c-ec7ac2705c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.622905 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc 
kubenswrapper[4873]: I0121 00:17:00.622957 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.622978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.622994 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.628241 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.632070 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e79fe55-0691-4477-b520-dd1d567bc5f0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4\" (UID: \"4e79fe55-0691-4477-b520-dd1d567bc5f0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.639039 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.639294 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.643983 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t\" (UID: \"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.689068 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9q2cg"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.689709 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.691890 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-bnftl" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.693046 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.698809 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9q2cg"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.762560 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.791227 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.829183 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.829259 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67d8\" (UniqueName: \"kubernetes.io/projected/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-kube-api-access-j67d8\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.851243 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-t4psx"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.852072 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.860480 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-76rqh" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.866529 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-t4psx"] Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.931271 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.931637 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qhs9\" (UniqueName: \"kubernetes.io/projected/3d39788c-bdb9-4216-80c3-da758b12627a-kube-api-access-9qhs9\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.931735 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d39788c-bdb9-4216-80c3-da758b12627a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.931771 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67d8\" (UniqueName: \"kubernetes.io/projected/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-kube-api-access-j67d8\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.939884 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:00 crc kubenswrapper[4873]: I0121 00:17:00.955995 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67d8\" (UniqueName: \"kubernetes.io/projected/35cf15d9-a01a-4ffe-a81a-efd6f7f974ca-kube-api-access-j67d8\") pod \"observability-operator-59bdc8b94-9q2cg\" (UID: \"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca\") " pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.014386 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.033020 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d39788c-bdb9-4216-80c3-da758b12627a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.033250 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qhs9\" (UniqueName: \"kubernetes.io/projected/3d39788c-bdb9-4216-80c3-da758b12627a-kube-api-access-9qhs9\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.033945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d39788c-bdb9-4216-80c3-da758b12627a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.057770 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qhs9\" (UniqueName: \"kubernetes.io/projected/3d39788c-bdb9-4216-80c3-da758b12627a-kube-api-access-9qhs9\") pod \"perses-operator-5bf474d74f-t4psx\" (UID: \"3d39788c-bdb9-4216-80c3-da758b12627a\") " pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.202819 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.295237 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t"] Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.307609 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4"] Jan 21 00:17:01 crc kubenswrapper[4873]: W0121 00:17:01.325677 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e79fe55_0691_4477_b520_dd1d567bc5f0.slice/crio-5d01e2d39ccaa85eb6849e7e74ba1a254a0ed0ddcef9ef0d5c10a8a10ddd7f80 WatchSource:0}: Error finding container 5d01e2d39ccaa85eb6849e7e74ba1a254a0ed0ddcef9ef0d5c10a8a10ddd7f80: Status 404 returned error can't find the container with id 5d01e2d39ccaa85eb6849e7e74ba1a254a0ed0ddcef9ef0d5c10a8a10ddd7f80 Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.332943 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s"] Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.451339 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9q2cg"] Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.530188 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-t4psx"] Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.786386 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" event={"ID":"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e","Type":"ContainerStarted","Data":"14648818dd81786058922da2d39bdbdb00d77d75843c2b4bf6f98f03996f281b"} Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.788403 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" event={"ID":"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca","Type":"ContainerStarted","Data":"a8dc7fd2ad30a80ac289643b7f47b3b488e8d75ffb2b002161ad8c2106970ef5"} Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.789840 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" event={"ID":"3d39788c-bdb9-4216-80c3-da758b12627a","Type":"ContainerStarted","Data":"744ecb9d92b85ac9e1b0be7a107ab378e6c6e293dd4440fa113bf2555f945b11"} Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.795978 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" event={"ID":"2e96a2df-caed-45ec-967c-ec7ac2705c55","Type":"ContainerStarted","Data":"03488380c3ee454fb626b24516f9f994f0d8034e475eefbe9d4ac7c18d576519"} Jan 21 00:17:01 crc kubenswrapper[4873]: I0121 00:17:01.797601 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" event={"ID":"4e79fe55-0691-4477-b520-dd1d567bc5f0","Type":"ContainerStarted","Data":"5d01e2d39ccaa85eb6849e7e74ba1a254a0ed0ddcef9ef0d5c10a8a10ddd7f80"} Jan 21 00:17:05 crc kubenswrapper[4873]: I0121 00:17:05.871125 4873 generic.go:334] "Generic (PLEG): container finished" podID="1cc90934-86a7-4617-930f-422e65f99caf" containerID="106b5a769c7610522e4e39561513b1ba7ab24a3dc8b0fab3499bf58743e46a8c" exitCode=0 Jan 21 
00:17:05 crc kubenswrapper[4873]: I0121 00:17:05.871675 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" event={"ID":"1cc90934-86a7-4617-930f-422e65f99caf","Type":"ContainerDied","Data":"106b5a769c7610522e4e39561513b1ba7ab24a3dc8b0fab3499bf58743e46a8c"} Jan 21 00:17:06 crc kubenswrapper[4873]: I0121 00:17:06.895657 4873 generic.go:334] "Generic (PLEG): container finished" podID="1cc90934-86a7-4617-930f-422e65f99caf" containerID="eb5ea3dce163dc1b9355d86b9717a37372435a2a5a2b751682a18779f199e68c" exitCode=0 Jan 21 00:17:06 crc kubenswrapper[4873]: I0121 00:17:06.895710 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" event={"ID":"1cc90934-86a7-4617-930f-422e65f99caf","Type":"ContainerDied","Data":"eb5ea3dce163dc1b9355d86b9717a37372435a2a5a2b751682a18779f199e68c"} Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.371349 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6bc699db7d-tmnp5"] Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.372235 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.377929 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.377962 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.378196 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-rtbwj" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.378269 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.391290 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6bc699db7d-tmnp5"] Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.427873 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-apiservice-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.427915 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7mw\" (UniqueName: \"kubernetes.io/projected/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-kube-api-access-gb7mw\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.427943 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-webhook-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc 
kubenswrapper[4873]: I0121 00:17:07.529426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-apiservice-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.529486 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7mw\" (UniqueName: \"kubernetes.io/projected/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-kube-api-access-gb7mw\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.529516 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-webhook-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.537392 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-webhook-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.539351 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-apiservice-cert\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.554695 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7mw\" (UniqueName: \"kubernetes.io/projected/8f8b8dbe-76f7-4d11-8b5d-3b067b981141-kube-api-access-gb7mw\") pod \"elastic-operator-6bc699db7d-tmnp5\" (UID: \"8f8b8dbe-76f7-4d11-8b5d-3b067b981141\") " pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:07 crc kubenswrapper[4873]: I0121 00:17:07.719160 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.719335 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.777646 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz8hw\" (UniqueName: \"kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw\") pod \"1cc90934-86a7-4617-930f-422e65f99caf\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.777723 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle\") pod \"1cc90934-86a7-4617-930f-422e65f99caf\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.777808 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util\") pod \"1cc90934-86a7-4617-930f-422e65f99caf\" (UID: \"1cc90934-86a7-4617-930f-422e65f99caf\") " Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.779390 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle" (OuterVolumeSpecName: "bundle") pod "1cc90934-86a7-4617-930f-422e65f99caf" (UID: "1cc90934-86a7-4617-930f-422e65f99caf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.792143 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw" (OuterVolumeSpecName: "kube-api-access-lz8hw") pod "1cc90934-86a7-4617-930f-422e65f99caf" (UID: "1cc90934-86a7-4617-930f-422e65f99caf"). InnerVolumeSpecName "kube-api-access-lz8hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.793363 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util" (OuterVolumeSpecName: "util") pod "1cc90934-86a7-4617-930f-422e65f99caf" (UID: "1cc90934-86a7-4617-930f-422e65f99caf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.884206 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.884253 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cc90934-86a7-4617-930f-422e65f99caf-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.884266 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz8hw\" (UniqueName: \"kubernetes.io/projected/1cc90934-86a7-4617-930f-422e65f99caf-kube-api-access-lz8hw\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.928244 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" event={"ID":"1cc90934-86a7-4617-930f-422e65f99caf","Type":"ContainerDied","Data":"2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798"} Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.928280 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8583ab0c7ad0de8b10659ff83b04601784fd831eedc23ef6270f78559f4798" Jan 21 00:17:09 crc kubenswrapper[4873]: I0121 00:17:09.928341 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.122922 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-6bd24"] Jan 21 00:17:11 crc kubenswrapper[4873]: E0121 00:17:11.123175 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="pull" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.123190 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="pull" Jan 21 00:17:11 crc kubenswrapper[4873]: E0121 00:17:11.123212 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="util" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.123219 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="util" Jan 21 00:17:11 crc kubenswrapper[4873]: E0121 00:17:11.123232 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="extract" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.123239 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="extract" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.123347 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc90934-86a7-4617-930f-422e65f99caf" containerName="extract" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.123846 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.125636 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-hzmtk" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.143473 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-6bd24"] Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.199404 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcqh\" (UniqueName: \"kubernetes.io/projected/0fa9b095-3892-4ab4-9b8a-29ee7631bfc7-kube-api-access-6mcqh\") pod \"interconnect-operator-5bb49f789d-6bd24\" (UID: \"0fa9b095-3892-4ab4-9b8a-29ee7631bfc7\") " pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.301338 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcqh\" (UniqueName: \"kubernetes.io/projected/0fa9b095-3892-4ab4-9b8a-29ee7631bfc7-kube-api-access-6mcqh\") pod \"interconnect-operator-5bb49f789d-6bd24\" (UID: \"0fa9b095-3892-4ab4-9b8a-29ee7631bfc7\") " pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.317974 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcqh\" (UniqueName: \"kubernetes.io/projected/0fa9b095-3892-4ab4-9b8a-29ee7631bfc7-kube-api-access-6mcqh\") pod \"interconnect-operator-5bb49f789d-6bd24\" (UID: \"0fa9b095-3892-4ab4-9b8a-29ee7631bfc7\") " pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" Jan 21 00:17:11 crc kubenswrapper[4873]: I0121 00:17:11.441000 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.058016 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-6bd24"] Jan 21 00:17:15 crc kubenswrapper[4873]: W0121 00:17:15.069404 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa9b095_3892_4ab4_9b8a_29ee7631bfc7.slice/crio-f0a5a6a3bc5eef359ca841a4362c0a2caae757d3475b81f2372b745c5fa75448 WatchSource:0}: Error finding container f0a5a6a3bc5eef359ca841a4362c0a2caae757d3475b81f2372b745c5fa75448: Status 404 returned error can't find the container with id f0a5a6a3bc5eef359ca841a4362c0a2caae757d3475b81f2372b745c5fa75448 Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.182380 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6bc699db7d-tmnp5"] Jan 21 00:17:15 crc kubenswrapper[4873]: W0121 00:17:15.186494 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8b8dbe_76f7_4d11_8b5d_3b067b981141.slice/crio-2dafe05e6b2fbe96fa0b6a99ff3b5c57474e92b5e476a9e978f2bc3b253a7f53 WatchSource:0}: Error finding container 2dafe05e6b2fbe96fa0b6a99ff3b5c57474e92b5e476a9e978f2bc3b253a7f53: Status 404 returned error can't find the container with id 2dafe05e6b2fbe96fa0b6a99ff3b5c57474e92b5e476a9e978f2bc3b253a7f53 Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.978125 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" event={"ID":"bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e","Type":"ContainerStarted","Data":"0299b3d29e0c67efcc0bac067b5c30858af71fc7a8feaabb06e39b84c1240800"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.980007 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" event={"ID":"8f8b8dbe-76f7-4d11-8b5d-3b067b981141","Type":"ContainerStarted","Data":"2dafe05e6b2fbe96fa0b6a99ff3b5c57474e92b5e476a9e978f2bc3b253a7f53"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.981568 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" event={"ID":"35cf15d9-a01a-4ffe-a81a-efd6f7f974ca","Type":"ContainerStarted","Data":"c5d4c314797131dff9776091df1472b11282826942e3ac31f8ebb0a2603f4689"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.981818 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.982498 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" event={"ID":"0fa9b095-3892-4ab4-9b8a-29ee7631bfc7","Type":"ContainerStarted","Data":"f0a5a6a3bc5eef359ca841a4362c0a2caae757d3475b81f2372b745c5fa75448"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.984584 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" event={"ID":"3d39788c-bdb9-4216-80c3-da758b12627a","Type":"ContainerStarted","Data":"34f7e581bc0ab5bf1735909a8c6e9f5df3ad21a7315272291f3bf2a0c0e9c3c6"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.984716 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.986046 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" event={"ID":"2e96a2df-caed-45ec-967c-ec7ac2705c55","Type":"ContainerStarted","Data":"2dffdce49c6492228d374f49ab6c127841326ff46272bd1c2005cfc0e57b008a"} Jan 21 00:17:15 crc kubenswrapper[4873]: I0121 00:17:15.988445 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" event={"ID":"4e79fe55-0691-4477-b520-dd1d567bc5f0","Type":"ContainerStarted","Data":"1c63d6d341707958770b1a8c77168573d40e289ec5061290f683f47aee9e86b0"} Jan 21 00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.012109 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t" podStartSLOduration=2.6694668459999997 podStartE2EDuration="16.012086561s" podCreationTimestamp="2026-01-21 00:17:00 +0000 UTC" firstStartedPulling="2026-01-21 00:17:01.331171132 +0000 UTC m=+653.571038778" lastFinishedPulling="2026-01-21 00:17:14.673790847 +0000 UTC m=+666.913658493" observedRunningTime="2026-01-21 00:17:16.006995458 +0000 UTC m=+668.246863114" watchObservedRunningTime="2026-01-21 00:17:16.012086561 +0000 UTC m=+668.251954217" Jan 21 00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.034333 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rtw4s" podStartSLOduration=2.69129158 podStartE2EDuration="16.034315556s" podCreationTimestamp="2026-01-21 00:17:00 +0000 UTC" firstStartedPulling="2026-01-21 00:17:01.331225684 +0000 UTC m=+653.571093330" lastFinishedPulling="2026-01-21 00:17:14.67424964 +0000 UTC m=+666.914117306" observedRunningTime="2026-01-21 00:17:16.031991135 +0000 UTC m=+668.271858781" watchObservedRunningTime="2026-01-21 00:17:16.034315556 +0000 UTC m=+668.274183202" Jan 21 00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.037461 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" Jan 21 00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.063816 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" podStartSLOduration=2.941848744 podStartE2EDuration="16.063801532s" podCreationTimestamp="2026-01-21 00:17:00 +0000 UTC" firstStartedPulling="2026-01-21 00:17:01.550738101 +0000 UTC m=+653.790605747" lastFinishedPulling="2026-01-21 00:17:14.672690869 +0000 UTC m=+666.912558535" observedRunningTime="2026-01-21 00:17:16.062777135 +0000 UTC m=+668.302644781" watchObservedRunningTime="2026-01-21 00:17:16.063801532 +0000 UTC m=+668.303669178" Jan 21 00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.104914 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9q2cg" podStartSLOduration=2.832273843 podStartE2EDuration="16.104894755s" podCreationTimestamp="2026-01-21 00:17:00 +0000 UTC" firstStartedPulling="2026-01-21 00:17:01.502827401 +0000 UTC m=+653.742695037" lastFinishedPulling="2026-01-21 00:17:14.775448303 +0000 UTC m=+667.015315949" observedRunningTime="2026-01-21 00:17:16.09751696 +0000 UTC m=+668.337384606" watchObservedRunningTime="2026-01-21 00:17:16.104894755 +0000 UTC m=+668.344762411" Jan 21 
00:17:16 crc kubenswrapper[4873]: I0121 00:17:16.133196 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4" podStartSLOduration=2.747999173 podStartE2EDuration="16.133175109s" podCreationTimestamp="2026-01-21 00:17:00 +0000 UTC" firstStartedPulling="2026-01-21 00:17:01.333810022 +0000 UTC m=+653.573677668" lastFinishedPulling="2026-01-21 00:17:14.718985958 +0000 UTC m=+666.958853604" observedRunningTime="2026-01-21 00:17:16.130832897 +0000 UTC m=+668.370700543" watchObservedRunningTime="2026-01-21 00:17:16.133175109 +0000 UTC m=+668.373042755" Jan 21 00:17:19 crc kubenswrapper[4873]: I0121 00:17:19.007359 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" event={"ID":"8f8b8dbe-76f7-4d11-8b5d-3b067b981141","Type":"ContainerStarted","Data":"9ec1345e1b0f3ec61a67222b22db5aafc7024347f7fc5c1b30e6402e54f1100f"} Jan 21 00:17:19 crc kubenswrapper[4873]: I0121 00:17:19.041076 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6bc699db7d-tmnp5" podStartSLOduration=9.107589396 podStartE2EDuration="12.041057334s" podCreationTimestamp="2026-01-21 00:17:07 +0000 UTC" firstStartedPulling="2026-01-21 00:17:15.188865744 +0000 UTC m=+667.428733380" lastFinishedPulling="2026-01-21 00:17:18.122333672 +0000 UTC m=+670.362201318" observedRunningTime="2026-01-21 00:17:19.039716998 +0000 UTC m=+671.279584654" watchObservedRunningTime="2026-01-21 00:17:19.041057334 +0000 UTC m=+671.280924980" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.187957 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.193060 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196134 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-4r2sp" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196219 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196258 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196348 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196380 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196531 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196606 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196638 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.196780 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.211347 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342287 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342344 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342410 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-bin-local\") pod 
\"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342438 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342463 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342497 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342752 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342779 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342812 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342834 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d9a84d57-1412-491b-94d9-28fd35610566-downward-api\") pod 
\"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342890 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342928 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.342951 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.443840 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.443914 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.443952 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444003 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444038 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444070 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444121 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444163 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444196 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d9a84d57-1412-491b-94d9-28fd35610566-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444271 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444323 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444352 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444392 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc 
kubenswrapper[4873]: I0121 00:17:20.444424 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.444454 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.451993 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.452610 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.453239 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.454391 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.455298 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.455849 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.456806 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.457097 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.458874 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.460832 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.461124 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a84d57-1412-491b-94d9-28fd35610566-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.462787 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.463765 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.466833 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d9a84d57-1412-491b-94d9-28fd35610566-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.474321 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d9a84d57-1412-491b-94d9-28fd35610566-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d9a84d57-1412-491b-94d9-28fd35610566\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:20 crc kubenswrapper[4873]: I0121 00:17:20.550168 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.206344 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-t4psx" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.583398 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh"] Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.584317 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.585998 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-dg8n2" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.586074 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.586164 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.604537 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh"] Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.659927 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsx5\" (UniqueName: \"kubernetes.io/projected/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-kube-api-access-8rsx5\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.660093 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.761821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rsx5\" (UniqueName: \"kubernetes.io/projected/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-kube-api-access-8rsx5\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.761880 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 
00:17:21.762401 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.784396 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rsx5\" (UniqueName: \"kubernetes.io/projected/1fc313ff-2cbd-4ce5-a5fe-33ca2918220c-kube-api-access-8rsx5\") pod \"cert-manager-operator-controller-manager-5446d6888b-hsqvh\" (UID: \"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:21 crc kubenswrapper[4873]: I0121 00:17:21.909797 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" Jan 21 00:17:25 crc kubenswrapper[4873]: I0121 00:17:25.264731 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh"] Jan 21 00:17:25 crc kubenswrapper[4873]: I0121 00:17:25.482137 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:17:26 crc kubenswrapper[4873]: I0121 00:17:26.059251 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d9a84d57-1412-491b-94d9-28fd35610566","Type":"ContainerStarted","Data":"9fe060f6a0a6001b8ba59e45285c05ea15978a96b39c736a0c519d40b7663159"} Jan 21 00:17:26 crc kubenswrapper[4873]: I0121 00:17:26.061307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" event={"ID":"0fa9b095-3892-4ab4-9b8a-29ee7631bfc7","Type":"ContainerStarted","Data":"17dd6f3a71d865f7ab808d8e550226c21d23b74fc72c60945faeaccdab3984f7"} Jan 21 00:17:26 crc kubenswrapper[4873]: I0121 00:17:26.069514 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" event={"ID":"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c","Type":"ContainerStarted","Data":"abee1216b198492bfbd51265c7f8251b296bc522027c97c1083a7ea83b439d7d"} Jan 21 00:17:26 crc kubenswrapper[4873]: I0121 00:17:26.081202 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-6bd24" podStartSLOduration=5.083451921 podStartE2EDuration="15.081187609s" podCreationTimestamp="2026-01-21 00:17:11 +0000 UTC" firstStartedPulling="2026-01-21 00:17:15.071618449 +0000 UTC m=+667.311486085" lastFinishedPulling="2026-01-21 00:17:25.069354127 +0000 UTC m=+677.309221773" observedRunningTime="2026-01-21 00:17:26.079454372 +0000 UTC m=+678.319322018" watchObservedRunningTime="2026-01-21 00:17:26.081187609 +0000 UTC m=+678.321055255" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.940759 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.941796 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.945569 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.945573 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.945693 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-g2sj2" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.945850 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 21 00:17:27 crc kubenswrapper[4873]: I0121 00:17:27.970786 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076197 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076254 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076328 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076352 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076379 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hgt\" (UniqueName: \"kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt\") pod \"service-telemetry-operator-1-build\" (UID: 
\"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076436 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076456 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076494 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076563 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076596 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.076618 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178209 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178269 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178292 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178309 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hgt\" (UniqueName: \"kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178336 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178351 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178382 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178406 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178429 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178445 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178462 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.178488 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.179946 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.180067 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.180112 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.180640 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.181238 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.183213 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.183545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.183840 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.183708 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.188976 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.189376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.201172 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hgt\" (UniqueName: \"kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt\") pod \"service-telemetry-operator-1-build\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.257520 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:28 crc kubenswrapper[4873]: I0121 00:17:28.632609 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:28 crc kubenswrapper[4873]: W0121 00:17:28.658483 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd7d1658_7b47_462b_8049_cbf740580a3e.slice/crio-96668b00b268a49a2369f1e0c7b684c003c47d0865c476375257dfc52d82b678 WatchSource:0}: Error finding container 96668b00b268a49a2369f1e0c7b684c003c47d0865c476375257dfc52d82b678: Status 404 returned error can't find the container with id 96668b00b268a49a2369f1e0c7b684c003c47d0865c476375257dfc52d82b678 Jan 21 00:17:29 crc kubenswrapper[4873]: I0121 00:17:29.106770 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dd7d1658-7b47-462b-8049-cbf740580a3e","Type":"ContainerStarted","Data":"96668b00b268a49a2369f1e0c7b684c003c47d0865c476375257dfc52d82b678"} Jan 21 00:17:38 crc kubenswrapper[4873]: I0121 00:17:38.118362 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.019600 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.020964 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.023839 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.023986 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.028324 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.043039 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156802 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: 
\"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156860 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156880 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.156909 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvbfp\" (UniqueName: \"kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157155 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157302 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157390 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157442 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157491 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.157633 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258624 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvbfp\" (UniqueName: \"kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258689 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258759 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258778 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258798 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258825 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258850 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258890 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258922 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258950 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.258976 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.259154 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.259282 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.259433 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.259544 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.260299 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.260520 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.260753 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.263858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.264247 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.269070 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.273576 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.279304 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvbfp\" (UniqueName: \"kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp\") pod \"service-telemetry-operator-2-build\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:40 crc kubenswrapper[4873]: I0121 00:17:40.338276 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.038714 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.039160 4873 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 00:17:47 crc kubenswrapper[4873]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile --v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"c77cfe42-0c2b-4100-941a-bffaa9b6d48f","resourceVersion":"33548","generation":1,"creationTimestamp":"2026-01-21T00:17:27Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"9ed980e1-b1f0-4951-98a4-e470660aa7c2","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2026-01-21T00:17:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9ed980e1-b1f0-4951-98a4-e470660aa7c2\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.38.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY 
BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e"},"pullSecret":{"name":"builder-dockercfg-g2sj2"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-g2sj2"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e","fromRef":{"kind":"ImageStreamTag","name":"ansible-operator:v1.38.1"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2026-01-21T00:17:27Z","lastTransitionTime":"2026-01-21T00:17:27Z"}]}} Jan 21 00:17:47 crc kubenswrapper[4873]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4hgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(dd7d1658-7b47-462b-8049-cbf740580a3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 21 00:17:47 crc 
kubenswrapper[4873]: > logger="UnhandledError" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.040341 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="dd7d1658-7b47-462b-8049-cbf740580a3e" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.737719 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.738190 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) --unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:Reso
urceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rsx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-hsqvh_cert-manager-operator(1fc313ff-2cbd-4ce5-a5fe-33ca2918220c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 00:17:47 crc kubenswrapper[4873]: E0121 00:17:47.739858 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" podUID="1fc313ff-2cbd-4ce5-a5fe-33ca2918220c" Jan 21 00:17:48 crc kubenswrapper[4873]: E0121 00:17:48.241465 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" podUID="1fc313ff-2cbd-4ce5-a5fe-33ca2918220c" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.283293 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.400087 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dd7d1658-7b47-462b-8049-cbf740580a3e","Type":"ContainerDied","Data":"96668b00b268a49a2369f1e0c7b684c003c47d0865c476375257dfc52d82b678"} Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.400180 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421772 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421835 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421865 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421898 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421921 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hgt\" (UniqueName: \"kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.421970 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422004 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422065 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422091 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422117 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: 
\"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422152 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422182 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull\") pod \"dd7d1658-7b47-462b-8049-cbf740580a3e\" (UID: \"dd7d1658-7b47-462b-8049-cbf740580a3e\") " Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.422641 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423072 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423375 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423390 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423411 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423587 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423730 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.423839 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.424061 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.427379 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-pull") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "builder-dockercfg-g2sj2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.433159 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-push") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "builder-dockercfg-g2sj2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.439166 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt" (OuterVolumeSpecName: "kube-api-access-s4hgt") pod "dd7d1658-7b47-462b-8049-cbf740580a3e" (UID: "dd7d1658-7b47-462b-8049-cbf740580a3e"). InnerVolumeSpecName "kube-api-access-s4hgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532177 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532211 4873 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532221 4873 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532229 4873 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532238 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-push\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532246 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd7d1658-7b47-462b-8049-cbf740580a3e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532254 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/dd7d1658-7b47-462b-8049-cbf740580a3e-builder-dockercfg-g2sj2-pull\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532263 4873 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532271 4873 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532279 4873 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7d1658-7b47-462b-8049-cbf740580a3e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.532287 4873 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd7d1658-7b47-462b-8049-cbf740580a3e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.533637 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hgt\" (UniqueName: \"kubernetes.io/projected/dd7d1658-7b47-462b-8049-cbf740580a3e-kube-api-access-s4hgt\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.574999 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.758711 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:52 crc kubenswrapper[4873]: I0121 00:17:52.769038 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 00:17:53 crc kubenswrapper[4873]: I0121 00:17:53.407901 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"81cf4980-c18d-4e80-aa20-3cac32d11da9","Type":"ContainerStarted","Data":"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735"} Jan 21 00:17:53 crc kubenswrapper[4873]: I0121 00:17:53.408355 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"81cf4980-c18d-4e80-aa20-3cac32d11da9","Type":"ContainerStarted","Data":"392830b77d49b28facb72a211415cec45bfb96f1346d757e9077262adbf3c590"} Jan 21 00:17:53 crc kubenswrapper[4873]: I0121 00:17:53.409332 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d9a84d57-1412-491b-94d9-28fd35610566","Type":"ContainerStarted","Data":"44da5df64bfcce9e50f2e6fe2d8b16ac2d08b6565619cf7335a63f00e45c0982"} Jan 21 00:17:53 crc kubenswrapper[4873]: E0121 00:17:53.459994 4873 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4728576123843522043, SKID=, AKID=50:3B:44:84:12:86:E1:DE:50:74:74:B4:E5:39:92:D5:EB:D4:A6:9D failed: x509: certificate signed by unknown authority" Jan 21 00:17:53 crc kubenswrapper[4873]: I0121 00:17:53.571799 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:17:53 crc kubenswrapper[4873]: I0121 00:17:53.636648 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:17:54 crc kubenswrapper[4873]: I0121 00:17:54.071542 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7d1658-7b47-462b-8049-cbf740580a3e" path="/var/lib/kubelet/pods/dd7d1658-7b47-462b-8049-cbf740580a3e/volumes" Jan 21 00:17:54 crc kubenswrapper[4873]: I0121 00:17:54.415712 4873 generic.go:334] "Generic (PLEG): container finished" podID="d9a84d57-1412-491b-94d9-28fd35610566" containerID="44da5df64bfcce9e50f2e6fe2d8b16ac2d08b6565619cf7335a63f00e45c0982" exitCode=0 Jan 21 00:17:54 crc kubenswrapper[4873]: I0121 00:17:54.415811 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d9a84d57-1412-491b-94d9-28fd35610566","Type":"ContainerDied","Data":"44da5df64bfcce9e50f2e6fe2d8b16ac2d08b6565619cf7335a63f00e45c0982"} Jan 21 00:17:54 crc kubenswrapper[4873]: I0121 00:17:54.502114 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.424878 4873 generic.go:334] "Generic (PLEG): container finished" podID="d9a84d57-1412-491b-94d9-28fd35610566" containerID="7e3a31a54ade85df5dbe0ff50a6b9d8676c58d39bb0f84db716ef9130748b575" exitCode=0 Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.425425 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="81cf4980-c18d-4e80-aa20-3cac32d11da9" 
containerName="git-clone" containerID="cri-o://cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735" gracePeriod=30 Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.425506 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d9a84d57-1412-491b-94d9-28fd35610566","Type":"ContainerDied","Data":"7e3a31a54ade85df5dbe0ff50a6b9d8676c58d39bb0f84db716ef9130748b575"} Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.787976 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_81cf4980-c18d-4e80-aa20-3cac32d11da9/git-clone/0.log" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.788048 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980001 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980351 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980381 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980406 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980532 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980593 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvbfp\" (UniqueName: \"kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980667 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980088 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.980844 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981015 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981022 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981054 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981078 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981132 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981163 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981181 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981212 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981242 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull\") pod \"81cf4980-c18d-4e80-aa20-3cac32d11da9\" (UID: \"81cf4980-c18d-4e80-aa20-3cac32d11da9\") " Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981610 4873 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981917 4873 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981931 4873 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981942 4873 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981725 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981847 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981870 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.981999 4873 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.982009 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.986392 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp" (OuterVolumeSpecName: "kube-api-access-zvbfp") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "kube-api-access-zvbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.988103 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-pull") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "builder-dockercfg-g2sj2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:17:55 crc kubenswrapper[4873]: I0121 00:17:55.993752 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-push") pod "81cf4980-c18d-4e80-aa20-3cac32d11da9" (UID: "81cf4980-c18d-4e80-aa20-3cac32d11da9"). InnerVolumeSpecName "builder-dockercfg-g2sj2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083470 4873 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083499 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-push\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083509 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/81cf4980-c18d-4e80-aa20-3cac32d11da9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083518 4873 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/81cf4980-c18d-4e80-aa20-3cac32d11da9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083526 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/81cf4980-c18d-4e80-aa20-3cac32d11da9-builder-dockercfg-g2sj2-pull\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.083535 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvbfp\" (UniqueName: \"kubernetes.io/projected/81cf4980-c18d-4e80-aa20-3cac32d11da9-kube-api-access-zvbfp\") on node \"crc\" DevicePath \"\"" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.431960 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_81cf4980-c18d-4e80-aa20-3cac32d11da9/git-clone/0.log" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.432014 4873 generic.go:334] "Generic (PLEG): container finished" podID="81cf4980-c18d-4e80-aa20-3cac32d11da9" containerID="cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735" exitCode=1 Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.432085 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.432118 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"81cf4980-c18d-4e80-aa20-3cac32d11da9","Type":"ContainerDied","Data":"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735"} Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.432152 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"81cf4980-c18d-4e80-aa20-3cac32d11da9","Type":"ContainerDied","Data":"392830b77d49b28facb72a211415cec45bfb96f1346d757e9077262adbf3c590"} Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.432173 4873 scope.go:117] "RemoveContainer" containerID="cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.436478 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d9a84d57-1412-491b-94d9-28fd35610566","Type":"ContainerStarted","Data":"c4308f4312b8dd2dda313e2d78d633fbf9d61c1ced2989cf2cc0ded0982da2b5"} Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.437073 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.447482 4873 scope.go:117] "RemoveContainer" containerID="cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735" Jan 21 00:17:56 crc kubenswrapper[4873]: E0121 00:17:56.447944 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735\": container with ID starting with cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735 not found: ID does not exist" containerID="cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.447983 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735"} err="failed to get container status \"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735\": rpc error: code = NotFound desc = could not find container \"cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735\": container with ID starting with cab6f35bb3fd26e2f7f14f28b62fba553b94319e3322ceac6571c66eaa366735 not found: ID does not exist" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.481807 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.27737707 podStartE2EDuration="36.481784968s" podCreationTimestamp="2026-01-21 00:17:20 +0000 UTC" firstStartedPulling="2026-01-21 00:17:25.488336554 +0000 UTC m=+677.728204200" lastFinishedPulling="2026-01-21 00:17:52.692744452 +0000 UTC m=+704.932612098" observedRunningTime="2026-01-21 00:17:56.471665342 +0000 UTC m=+708.711532988" watchObservedRunningTime="2026-01-21 00:17:56.481784968 +0000 UTC m=+708.721652634" Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.490885 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:56 crc kubenswrapper[4873]: I0121 00:17:56.495032 4873 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 00:17:58 crc kubenswrapper[4873]: I0121 00:17:58.071370 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cf4980-c18d-4e80-aa20-3cac32d11da9" path="/var/lib/kubelet/pods/81cf4980-c18d-4e80-aa20-3cac32d11da9/volumes" Jan 21 00:18:01 crc kubenswrapper[4873]: I0121 00:18:01.465106 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" event={"ID":"1fc313ff-2cbd-4ce5-a5fe-33ca2918220c","Type":"ContainerStarted","Data":"a52302102036f94ce96859de18f6c8a74467ea6ac9e631d6976311b0d803efd9"} Jan 21 00:18:01 crc kubenswrapper[4873]: I0121 00:18:01.485923 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-hsqvh" podStartSLOduration=4.797524972 podStartE2EDuration="40.485909144s" podCreationTimestamp="2026-01-21 00:17:21 +0000 UTC" firstStartedPulling="2026-01-21 00:17:25.294701348 +0000 UTC m=+677.534568984" lastFinishedPulling="2026-01-21 00:18:00.98308551 +0000 UTC m=+713.222953156" observedRunningTime="2026-01-21 00:18:01.485626067 +0000 UTC m=+713.725493713" watchObservedRunningTime="2026-01-21 00:18:01.485909144 +0000 UTC m=+713.725776790" Jan 21 00:18:01 crc kubenswrapper[4873]: I0121 00:18:01.630758 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:18:01 crc kubenswrapper[4873]: I0121 00:18:01.630816 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:18:05 crc kubenswrapper[4873]: I0121 00:18:05.739471 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d9a84d57-1412-491b-94d9-28fd35610566" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:18:05 crc kubenswrapper[4873]: {"timestamp": "2026-01-21T00:18:05+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:18:05 crc kubenswrapper[4873]: > Jan 21 00:18:05 crc kubenswrapper[4873]: I0121 00:18:05.989586 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:05 crc kubenswrapper[4873]: E0121 00:18:05.989772 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cf4980-c18d-4e80-aa20-3cac32d11da9" containerName="git-clone" Jan 21 00:18:05 crc kubenswrapper[4873]: I0121 00:18:05.989782 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cf4980-c18d-4e80-aa20-3cac32d11da9" containerName="git-clone" Jan 21 00:18:05 crc kubenswrapper[4873]: I0121 00:18:05.989901 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cf4980-c18d-4e80-aa20-3cac32d11da9" containerName="git-clone" Jan 21 00:18:05 crc kubenswrapper[4873]: I0121 00:18:05.990606 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:05.993742 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:05.993784 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:05.993850 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-g2sj2" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.006135 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.020878 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.020967 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.021002 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.021029 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.021329 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.021366 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.025643 4873 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhk5\" (UniqueName: \"kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122244 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122278 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122298 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122318 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122346 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122372 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122386 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc 
kubenswrapper[4873]: I0121 00:18:06.122411 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122429 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122444 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122464 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.122520 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.123106 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.123166 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.124321 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.128163 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: 
\"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.128447 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223400 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223453 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223471 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223497 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223512 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.223539 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhk5\" (UniqueName: \"kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.224510 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.224741 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.224914 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.225346 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.225525 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.251241 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhk5\" (UniqueName: \"kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5\") pod \"service-telemetry-operator-3-build\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.323809 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:06 crc kubenswrapper[4873]: I0121 00:18:06.762723 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.508798 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2a3f98df-a6ad-4b6d-9978-b05576dcfe91","Type":"ContainerStarted","Data":"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf"} Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.509101 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2a3f98df-a6ad-4b6d-9978-b05576dcfe91","Type":"ContainerStarted","Data":"fbd223aad98697ba77c07545f61f7c8235ebf2ebd1dd6fd768ecae9d5f018b55"} Jan 21 00:18:07 crc kubenswrapper[4873]: E0121 00:18:07.649291 4873 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4728576123843522043, SKID=, AKID=50:3B:44:84:12:86:E1:DE:50:74:74:B4:E5:39:92:D5:EB:D4:A6:9D failed: x509: certificate signed by unknown authority" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.711777 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj"] Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.712477 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.715936 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.715961 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.718028 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lg4m2" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.729882 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj"] Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.842207 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjs9\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-kube-api-access-qdjs9\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: \"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.842318 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: \"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.943332 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjs9\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-kube-api-access-qdjs9\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: 
\"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.943476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: \"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.962859 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: \"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:07 crc kubenswrapper[4873]: I0121 00:18:07.963113 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjs9\" (UniqueName: \"kubernetes.io/projected/b626d9c0-c895-4804-8bbf-fdc3cc0ffddc-kube-api-access-qdjs9\") pod \"cert-manager-cainjector-855d9ccff4-b4nhj\" (UID: \"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:08 crc kubenswrapper[4873]: I0121 00:18:08.032956 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lg4m2" Jan 21 00:18:08 crc kubenswrapper[4873]: I0121 00:18:08.041674 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" Jan 21 00:18:08 crc kubenswrapper[4873]: W0121 00:18:08.434714 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb626d9c0_c895_4804_8bbf_fdc3cc0ffddc.slice/crio-44109723a599612b3850af3a5cc2e68b83a4fb08c5d3d202d53a9e81bf6e3294 WatchSource:0}: Error finding container 44109723a599612b3850af3a5cc2e68b83a4fb08c5d3d202d53a9e81bf6e3294: Status 404 returned error can't find the container with id 44109723a599612b3850af3a5cc2e68b83a4fb08c5d3d202d53a9e81bf6e3294 Jan 21 00:18:08 crc kubenswrapper[4873]: I0121 00:18:08.441982 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj"] Jan 21 00:18:08 crc kubenswrapper[4873]: I0121 00:18:08.516022 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" event={"ID":"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc","Type":"ContainerStarted","Data":"44109723a599612b3850af3a5cc2e68b83a4fb08c5d3d202d53a9e81bf6e3294"} Jan 21 00:18:08 crc kubenswrapper[4873]: I0121 00:18:08.674355 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:09 crc kubenswrapper[4873]: I0121 00:18:09.523646 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" containerName="git-clone" containerID="cri-o://2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf" gracePeriod=30 Jan 21 00:18:09 crc kubenswrapper[4873]: I0121 00:18:09.983599 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2a3f98df-a6ad-4b6d-9978-b05576dcfe91/git-clone/0.log" Jan 21 00:18:09 crc kubenswrapper[4873]: I0121 00:18:09.983933 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012308 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012370 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012394 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012411 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012425 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012445 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012465 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012480 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012473 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: 
"2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012502 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012525 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012596 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzhk5\" (UniqueName: \"kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012626 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull\") pod \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\" (UID: \"2a3f98df-a6ad-4b6d-9978-b05576dcfe91\") " Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012786 4873 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.012807 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.014886 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.014897 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.015379 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.015440 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.015678 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.015747 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.015779 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.020134 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-push") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "builder-dockercfg-g2sj2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.020285 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-pull") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "builder-dockercfg-g2sj2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.020470 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5" (OuterVolumeSpecName: "kube-api-access-fzhk5") pod "2a3f98df-a6ad-4b6d-9978-b05576dcfe91" (UID: "2a3f98df-a6ad-4b6d-9978-b05576dcfe91"). InnerVolumeSpecName "kube-api-access-fzhk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114432 4873 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114475 4873 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114485 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzhk5\" (UniqueName: \"kubernetes.io/projected/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-kube-api-access-fzhk5\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114494 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-pull\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114503 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-builder-dockercfg-g2sj2-push\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114512 4873 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114520 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114528 4873 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114536 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114557 4873 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.114565 4873 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2a3f98df-a6ad-4b6d-9978-b05576dcfe91-build-blob-cache\") 
on node \"crc\" DevicePath \"\"" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.532969 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2a3f98df-a6ad-4b6d-9978-b05576dcfe91/git-clone/0.log" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.533006 4873 generic.go:334] "Generic (PLEG): container finished" podID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" containerID="2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf" exitCode=1 Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.533031 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2a3f98df-a6ad-4b6d-9978-b05576dcfe91","Type":"ContainerDied","Data":"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf"} Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.533056 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2a3f98df-a6ad-4b6d-9978-b05576dcfe91","Type":"ContainerDied","Data":"fbd223aad98697ba77c07545f61f7c8235ebf2ebd1dd6fd768ecae9d5f018b55"} Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.533071 4873 scope.go:117] "RemoveContainer" containerID="2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.533161 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.562477 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.567873 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.570108 4873 scope.go:117] "RemoveContainer" containerID="2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf" Jan 21 00:18:10 crc kubenswrapper[4873]: E0121 00:18:10.570584 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf\": container with ID starting with 2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf not found: ID does not exist" containerID="2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.570620 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf"} err="failed to get container status \"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf\": rpc error: code = NotFound desc = could not find container \"2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf\": container with ID starting with 2d7364f672d5049038d1a7e49b11fdf1625c0f5c436d8e56dc9e7c782b30ccbf not found: ID does not exist" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.646501 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d9a84d57-1412-491b-94d9-28fd35610566" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:18:10 crc kubenswrapper[4873]: {"timestamp": "2026-01-21T00:18:10+00:00", "message": "readiness probe failed", 
"curl_rc": "7"} Jan 21 00:18:10 crc kubenswrapper[4873]: > Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.703347 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-prt5l"] Jan 21 00:18:10 crc kubenswrapper[4873]: E0121 00:18:10.703649 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" containerName="git-clone" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.703671 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" containerName="git-clone" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.703808 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" containerName="git-clone" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.704340 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.706119 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4268h" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.726432 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-prt5l"] Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.820414 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvnc\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-kube-api-access-dvvnc\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.820805 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.921611 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvnc\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-kube-api-access-dvvnc\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.921665 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.938142 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:10 crc kubenswrapper[4873]: I0121 00:18:10.938306 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvnc\" (UniqueName: \"kubernetes.io/projected/8d55c464-d289-4042-a198-c80450070784-kube-api-access-dvvnc\") pod \"cert-manager-webhook-f4fb5df64-prt5l\" (UID: \"8d55c464-d289-4042-a198-c80450070784\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:11 crc kubenswrapper[4873]: I0121 00:18:11.020211 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:11 crc kubenswrapper[4873]: I0121 00:18:11.456189 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-prt5l"] Jan 21 00:18:11 crc kubenswrapper[4873]: I0121 00:18:11.548333 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" event={"ID":"8d55c464-d289-4042-a198-c80450070784","Type":"ContainerStarted","Data":"0b0c92449587e74141583dee7ef78c1854f91fccfe80a5cbf85be38f20c45651"} Jan 21 00:18:12 crc kubenswrapper[4873]: I0121 00:18:12.070344 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3f98df-a6ad-4b6d-9978-b05576dcfe91" path="/var/lib/kubelet/pods/2a3f98df-a6ad-4b6d-9978-b05576dcfe91/volumes" Jan 21 00:18:15 crc kubenswrapper[4873]: I0121 00:18:15.902688 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.208060 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.209683 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.211671 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-g2sj2" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.211808 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-global-ca" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.212129 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-sys-config" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.214044 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-ca" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223573 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjv6\" (UniqueName: \"kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223685 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223757 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223803 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223843 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223868 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223896 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223944 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.223968 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.237456 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325012 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjv6\" (UniqueName: \"kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325338 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325467 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325635 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325748 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 
00:18:20.325964 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326074 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326181 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326282 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326310 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326089 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325991 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326021 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326295 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: 
\"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.325928 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326483 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326491 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326518 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.326969 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.327322 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.333099 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.342071 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.368299 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjv6\" (UniqueName: \"kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6\") pod \"service-telemetry-operator-4-build\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:20 crc kubenswrapper[4873]: I0121 00:18:20.566268 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:21 crc kubenswrapper[4873]: I0121 00:18:21.772957 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:21 crc kubenswrapper[4873]: W0121 00:18:21.782502 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848c9fff_eef3_484c_bce9_bd5661032ff4.slice/crio-52c671ee2d6942533f0395e67a2434374baab4fd6a5ad66c808d9922f9facaab WatchSource:0}: Error finding container 52c671ee2d6942533f0395e67a2434374baab4fd6a5ad66c808d9922f9facaab: Status 404 returned error can't find the container with id 52c671ee2d6942533f0395e67a2434374baab4fd6a5ad66c808d9922f9facaab Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.639378 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"848c9fff-eef3-484c-bce9-bd5661032ff4","Type":"ContainerStarted","Data":"52c671ee2d6942533f0395e67a2434374baab4fd6a5ad66c808d9922f9facaab"} Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.641520 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" event={"ID":"b626d9c0-c895-4804-8bbf-fdc3cc0ffddc","Type":"ContainerStarted","Data":"3de8e87d1bc1cb27a8d00712f6116a1fb86e878423dd4ca944152ca8590aa0f0"} Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.644087 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" event={"ID":"8d55c464-d289-4042-a198-c80450070784","Type":"ContainerStarted","Data":"351a00537a33f67d307711c57f4b04edb3ec52015a1a1b2373fd6ca5fe036585"} Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.644188 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.660810 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-b4nhj" podStartSLOduration=2.650736773 podStartE2EDuration="15.660789774s" podCreationTimestamp="2026-01-21 00:18:07 +0000 UTC" firstStartedPulling="2026-01-21 00:18:08.440850763 +0000 UTC m=+720.680718409" lastFinishedPulling="2026-01-21 00:18:21.450903764 +0000 UTC m=+733.690771410" observedRunningTime="2026-01-21 00:18:22.658630526 +0000 UTC m=+734.898498202" watchObservedRunningTime="2026-01-21 00:18:22.660789774 +0000 UTC m=+734.900657430" Jan 21 00:18:22 crc kubenswrapper[4873]: I0121 00:18:22.688599 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" podStartSLOduration=2.724047196 podStartE2EDuration="12.688580414s" podCreationTimestamp="2026-01-21 00:18:10 +0000 UTC" firstStartedPulling="2026-01-21 00:18:11.484431234 +0000 UTC m=+723.724298880" lastFinishedPulling="2026-01-21 00:18:21.448964452 +0000 UTC m=+733.688832098" 
observedRunningTime="2026-01-21 00:18:22.684147266 +0000 UTC m=+734.924014922" watchObservedRunningTime="2026-01-21 00:18:22.688580414 +0000 UTC m=+734.928448070" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.121216 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-chdlx"] Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.121956 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.124387 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6rrfv" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.131148 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-chdlx"] Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.304281 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88gj\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-kube-api-access-m88gj\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.304446 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-bound-sa-token\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.406544 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88gj\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-kube-api-access-m88gj\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.406749 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-bound-sa-token\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.435528 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88gj\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-kube-api-access-m88gj\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.437802 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c72aa9c7-1bed-469a-aac4-9809002c10af-bound-sa-token\") pod \"cert-manager-86cb77c54b-chdlx\" (UID: \"c72aa9c7-1bed-469a-aac4-9809002c10af\") " pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.449507 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-chdlx" Jan 21 00:18:24 crc kubenswrapper[4873]: I0121 00:18:24.981592 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-chdlx"] Jan 21 00:18:25 crc kubenswrapper[4873]: I0121 00:18:25.699119 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"848c9fff-eef3-484c-bce9-bd5661032ff4","Type":"ContainerStarted","Data":"3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409"} Jan 21 00:18:25 crc kubenswrapper[4873]: I0121 00:18:25.700683 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-chdlx" event={"ID":"c72aa9c7-1bed-469a-aac4-9809002c10af","Type":"ContainerStarted","Data":"da7715a95b0c5c0388e8ccac762dabb7b7ef01133717975820b65df6f400359e"} Jan 21 00:18:25 crc kubenswrapper[4873]: E0121 00:18:25.748775 4873 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4728576123843522043, SKID=, AKID=50:3B:44:84:12:86:E1:DE:50:74:74:B4:E5:39:92:D5:EB:D4:A6:9D failed: x509: certificate signed by unknown authority" Jan 21 00:18:26 crc kubenswrapper[4873]: I0121 00:18:26.023062 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-prt5l" Jan 21 00:18:26 crc kubenswrapper[4873]: I0121 00:18:26.709389 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-chdlx" event={"ID":"c72aa9c7-1bed-469a-aac4-9809002c10af","Type":"ContainerStarted","Data":"34f41d29736d5cd8185fb4e88e393cc88d4e8dfda287284ae5f875b80e8a47f0"} Jan 21 00:18:26 crc kubenswrapper[4873]: I0121 00:18:26.727118 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-chdlx" podStartSLOduration=2.72708488 podStartE2EDuration="2.72708488s" podCreationTimestamp="2026-01-21 00:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:18:26.721191773 +0000 UTC m=+738.961059459" watchObservedRunningTime="2026-01-21 00:18:26.72708488 +0000 UTC m=+738.966952566" Jan 21 00:18:26 crc kubenswrapper[4873]: I0121 00:18:26.777756 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:27 crc kubenswrapper[4873]: I0121 00:18:27.716268 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="848c9fff-eef3-484c-bce9-bd5661032ff4" containerName="git-clone" containerID="cri-o://3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409" gracePeriod=30 Jan 21 00:18:27 crc kubenswrapper[4873]: E0121 00:18:27.815137 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848c9fff_eef3_484c_bce9_bd5661032ff4.slice/crio-3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409.scope\": RecentStats: unable to find data in memory cache]" Jan 21 00:18:29 crc kubenswrapper[4873]: I0121 00:18:29.743967 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_848c9fff-eef3-484c-bce9-bd5661032ff4/git-clone/0.log" Jan 21 00:18:29 crc kubenswrapper[4873]: I0121 00:18:29.744387 4873 
generic.go:334] "Generic (PLEG): container finished" podID="848c9fff-eef3-484c-bce9-bd5661032ff4" containerID="3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409" exitCode=1 Jan 21 00:18:29 crc kubenswrapper[4873]: I0121 00:18:29.744431 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"848c9fff-eef3-484c-bce9-bd5661032ff4","Type":"ContainerDied","Data":"3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409"} Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.089306 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_848c9fff-eef3-484c-bce9-bd5661032ff4/git-clone/0.log" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.089400 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146300 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146356 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146383 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146417 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146445 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjv6\" (UniqueName: \"kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146468 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146496 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 
00:18:30.146534 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146589 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146626 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146649 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146678 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root\") pod \"848c9fff-eef3-484c-bce9-bd5661032ff4\" (UID: \"848c9fff-eef3-484c-bce9-bd5661032ff4\") " Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146687 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146687 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.146936 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.147080 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.148150 4873 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.148181 4873 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/848c9fff-eef3-484c-bce9-bd5661032ff4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.148195 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.148209 4873 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.149920 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.150021 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.150043 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.150113 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.150944 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.152817 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-pull") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "builder-dockercfg-g2sj2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.152914 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-push") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "builder-dockercfg-g2sj2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.153475 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6" (OuterVolumeSpecName: "kube-api-access-rzjv6") pod "848c9fff-eef3-484c-bce9-bd5661032ff4" (UID: "848c9fff-eef3-484c-bce9-bd5661032ff4"). InnerVolumeSpecName "kube-api-access-rzjv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250437 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-push\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250510 4873 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250519 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250528 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/848c9fff-eef3-484c-bce9-bd5661032ff4-builder-dockercfg-g2sj2-pull\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250540 4873 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/848c9fff-eef3-484c-bce9-bd5661032ff4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250564 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzjv6\" (UniqueName: \"kubernetes.io/projected/848c9fff-eef3-484c-bce9-bd5661032ff4-kube-api-access-rzjv6\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250573 4873 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.250582 4873 reconciler_common.go:293] 
"Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/848c9fff-eef3-484c-bce9-bd5661032ff4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.754144 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_848c9fff-eef3-484c-bce9-bd5661032ff4/git-clone/0.log" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.754252 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"848c9fff-eef3-484c-bce9-bd5661032ff4","Type":"ContainerDied","Data":"52c671ee2d6942533f0395e67a2434374baab4fd6a5ad66c808d9922f9facaab"} Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.754319 4873 scope.go:117] "RemoveContainer" containerID="3a246351c2b5d00ce4dec27df04abdfe84e4d8ae5777bf0eb31faec1ae5ad409" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.754382 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.797146 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:30 crc kubenswrapper[4873]: I0121 00:18:30.803120 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 21 00:18:31 crc kubenswrapper[4873]: I0121 00:18:31.630773 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:18:31 crc kubenswrapper[4873]: I0121 00:18:31.631141 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:18:32 crc kubenswrapper[4873]: I0121 00:18:32.069266 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848c9fff-eef3-484c-bce9-bd5661032ff4" path="/var/lib/kubelet/pods/848c9fff-eef3-484c-bce9-bd5661032ff4/volumes" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.357182 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:38 crc kubenswrapper[4873]: E0121 00:18:38.357756 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848c9fff-eef3-484c-bce9-bd5661032ff4" containerName="git-clone" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.357770 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="848c9fff-eef3-484c-bce9-bd5661032ff4" containerName="git-clone" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.357875 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="848c9fff-eef3-484c-bce9-bd5661032ff4" containerName="git-clone" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.358730 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.360459 4873 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-g2sj2" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.361241 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-global-ca" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.361355 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-ca" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.365686 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-sys-config" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.377853 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458591 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458712 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458738 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458862 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: 
\"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458887 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458908 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458926 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458946 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458961 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.458976 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmgj\" (UniqueName: \"kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.560638 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.560873 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.560978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561375 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561487 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngmgj\" (UniqueName: \"kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561856 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.562719 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561403 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561749 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561327 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.561441 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563212 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563319 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563370 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563383 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563410 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563450 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563479 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563870 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.563875 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.564481 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.567766 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.567935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.577970 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngmgj\" (UniqueName: \"kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj\") pod \"service-telemetry-operator-5-build\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:38 crc kubenswrapper[4873]: I0121 00:18:38.678963 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:39 crc kubenswrapper[4873]: I0121 00:18:39.144099 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:39 crc kubenswrapper[4873]: W0121 00:18:39.148380 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b18a18_676f_4d59_b793_4bf00b4fcbcb.slice/crio-dd44e2666436100a0b0f01f14f1442dfa0a317ff1a44d9b77a0b32dde7f6a265 WatchSource:0}: Error finding container dd44e2666436100a0b0f01f14f1442dfa0a317ff1a44d9b77a0b32dde7f6a265: Status 404 returned error can't find the container with id dd44e2666436100a0b0f01f14f1442dfa0a317ff1a44d9b77a0b32dde7f6a265 Jan 21 00:18:39 crc kubenswrapper[4873]: I0121 00:18:39.825592 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"a8b18a18-676f-4d59-b793-4bf00b4fcbcb","Type":"ContainerStarted","Data":"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a"} Jan 21 00:18:39 crc kubenswrapper[4873]: I0121 00:18:39.825831 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"a8b18a18-676f-4d59-b793-4bf00b4fcbcb","Type":"ContainerStarted","Data":"dd44e2666436100a0b0f01f14f1442dfa0a317ff1a44d9b77a0b32dde7f6a265"} Jan 21 00:18:39 crc kubenswrapper[4873]: E0121 00:18:39.875997 4873 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4728576123843522043, SKID=, AKID=50:3B:44:84:12:86:E1:DE:50:74:74:B4:E5:39:92:D5:EB:D4:A6:9D failed: x509: certificate signed by unknown authority" Jan 21 00:18:40 crc kubenswrapper[4873]: I0121 00:18:40.908243 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:41 crc kubenswrapper[4873]: I0121 00:18:41.289988 4873 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:18:41 crc kubenswrapper[4873]: I0121 00:18:41.913610 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" containerName="git-clone" containerID="cri-o://eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a" gracePeriod=30 Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.312650 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_a8b18a18-676f-4d59-b793-4bf00b4fcbcb/git-clone/0.log" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.313039 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416185 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416289 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416315 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416350 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416401 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416441 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416477 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416504 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416530 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416583 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416632 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngmgj\" (UniqueName: \"kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416665 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir\") pod \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\" (UID: \"a8b18a18-676f-4d59-b793-4bf00b4fcbcb\") " Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416844 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416935 4873 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.416960 4873 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417077 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417114 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417223 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417339 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417511 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.417981 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.418108 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.421819 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-push") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "builder-dockercfg-g2sj2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.422147 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull" (OuterVolumeSpecName: "builder-dockercfg-g2sj2-pull") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "builder-dockercfg-g2sj2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.422931 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj" (OuterVolumeSpecName: "kube-api-access-ngmgj") pod "a8b18a18-676f-4d59-b793-4bf00b4fcbcb" (UID: "a8b18a18-676f-4d59-b793-4bf00b4fcbcb"). InnerVolumeSpecName "kube-api-access-ngmgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517601 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngmgj\" (UniqueName: \"kubernetes.io/projected/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-kube-api-access-ngmgj\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517633 4873 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517645 4873 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517654 4873 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517663 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-push\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-push\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517672 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517680 4873 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517688 4873 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517697 4873 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-g2sj2-pull\" (UniqueName: \"kubernetes.io/secret/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-builder-dockercfg-g2sj2-pull\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.517705 4873 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a8b18a18-676f-4d59-b793-4bf00b4fcbcb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921491 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_a8b18a18-676f-4d59-b793-4bf00b4fcbcb/git-clone/0.log" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921616 4873 generic.go:334] "Generic (PLEG): container finished" podID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" containerID="eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a" exitCode=1 Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921711 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921715 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"a8b18a18-676f-4d59-b793-4bf00b4fcbcb","Type":"ContainerDied","Data":"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a"} Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921796 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"a8b18a18-676f-4d59-b793-4bf00b4fcbcb","Type":"ContainerDied","Data":"dd44e2666436100a0b0f01f14f1442dfa0a317ff1a44d9b77a0b32dde7f6a265"} Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.921829 4873 scope.go:117] "RemoveContainer" containerID="eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.945651 4873 scope.go:117] "RemoveContainer" containerID="eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a" Jan 21 00:18:42 crc kubenswrapper[4873]: E0121 00:18:42.948339 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a\": container with ID starting with eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a not found: ID does not exist" containerID="eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.948419 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a"} err="failed to get container status \"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a\": rpc error: code = NotFound desc = could not find container \"eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a\": container with ID starting with eb755c84266ef298ccbfa64c6d0006319010372a622543c168c1950d8debad1a not found: ID does not exist" Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.966125 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:42 crc kubenswrapper[4873]: I0121 00:18:42.975847 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 21 00:18:44 crc kubenswrapper[4873]: I0121 00:18:44.086454 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" path="/var/lib/kubelet/pods/a8b18a18-676f-4d59-b793-4bf00b4fcbcb/volumes" Jan 21 00:19:01 crc kubenswrapper[4873]: I0121 00:19:01.630126 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:19:01 crc kubenswrapper[4873]: I0121 00:19:01.630989 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:19:01 crc kubenswrapper[4873]: I0121 00:19:01.631097 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:19:01 crc kubenswrapper[4873]: I0121 00:19:01.632083 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:19:01 crc kubenswrapper[4873]: I0121 00:19:01.632197 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba" gracePeriod=600 Jan 21 00:19:02 crc kubenswrapper[4873]: I0121 00:19:02.074107 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba" exitCode=0 Jan 21 00:19:02 crc kubenswrapper[4873]: I0121 00:19:02.074171 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba"} Jan 21 00:19:02 crc kubenswrapper[4873]: I0121 00:19:02.074398 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d"} Jan 21 00:19:02 crc kubenswrapper[4873]: I0121 00:19:02.074421 4873 scope.go:117] "RemoveContainer" containerID="15c3f1fd3e4e90f15734f5086b38debb378833b2d9619dd1eb676e40cb62a9bb" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.758367 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sm5qv/must-gather-jsn4t"] Jan 21 00:19:19 crc kubenswrapper[4873]: E0121 00:19:19.759366 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" containerName="git-clone" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.759388 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" containerName="git-clone" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.759620 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b18a18-676f-4d59-b793-4bf00b4fcbcb" containerName="git-clone" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.760652 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.763505 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sm5qv"/"default-dockercfg-pdxql" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.763917 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sm5qv"/"openshift-service-ca.crt" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.764472 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sm5qv"/"kube-root-ca.crt" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.805665 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sm5qv/must-gather-jsn4t"] Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.958181 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t6l5\" (UniqueName: \"kubernetes.io/projected/df4b16fd-8c20-450c-9c30-82bebad6c6d5-kube-api-access-8t6l5\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:19 crc kubenswrapper[4873]: I0121 00:19:19.958260 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df4b16fd-8c20-450c-9c30-82bebad6c6d5-must-gather-output\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.059910 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t6l5\" (UniqueName: \"kubernetes.io/projected/df4b16fd-8c20-450c-9c30-82bebad6c6d5-kube-api-access-8t6l5\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.059970 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df4b16fd-8c20-450c-9c30-82bebad6c6d5-must-gather-output\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.060456 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df4b16fd-8c20-450c-9c30-82bebad6c6d5-must-gather-output\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.081533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t6l5\" (UniqueName: \"kubernetes.io/projected/df4b16fd-8c20-450c-9c30-82bebad6c6d5-kube-api-access-8t6l5\") pod \"must-gather-jsn4t\" (UID: \"df4b16fd-8c20-450c-9c30-82bebad6c6d5\") " pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.090514 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" Jan 21 00:19:20 crc kubenswrapper[4873]: I0121 00:19:20.481126 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sm5qv/must-gather-jsn4t"] Jan 21 00:19:21 crc kubenswrapper[4873]: I0121 00:19:21.203681 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" event={"ID":"df4b16fd-8c20-450c-9c30-82bebad6c6d5","Type":"ContainerStarted","Data":"a01827241bda7cc10ac222bf861246b1cb3a7a3bb429a62fed27e1b9dc5aba3f"} Jan 21 00:19:30 crc kubenswrapper[4873]: I0121 00:19:30.275046 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" event={"ID":"df4b16fd-8c20-450c-9c30-82bebad6c6d5","Type":"ContainerStarted","Data":"f5f89b45e066cdf4dd4831b61bc13a4b5e69e3b18536b70f3da9b5eb6d6f581d"} Jan 21 00:19:31 crc kubenswrapper[4873]: I0121 00:19:31.284300 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" event={"ID":"df4b16fd-8c20-450c-9c30-82bebad6c6d5","Type":"ContainerStarted","Data":"1c37f23d238954aa70953cdde5f7e9a6b7a4424a0cd1b9764e6960ce6e0c0388"} Jan 21 00:19:31 crc kubenswrapper[4873]: I0121 00:19:31.311597 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sm5qv/must-gather-jsn4t" podStartSLOduration=2.761376575 podStartE2EDuration="12.311533995s" podCreationTimestamp="2026-01-21 00:19:19 +0000 UTC" firstStartedPulling="2026-01-21 00:19:20.492142306 +0000 UTC m=+792.732009952" lastFinishedPulling="2026-01-21 00:19:30.042299696 +0000 UTC m=+802.282167372" observedRunningTime="2026-01-21 00:19:31.302029857 +0000 UTC m=+803.541897503" watchObservedRunningTime="2026-01-21 00:19:31.311533995 +0000 UTC m=+803.551401681" Jan 21 00:19:44 crc kubenswrapper[4873]: I0121 00:19:44.715695 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-z8d7g_62c1a236-f69b-401a-9340-c57b301f0657/control-plane-machine-set-operator/0.log" Jan 21 00:19:44 crc kubenswrapper[4873]: I0121 00:19:44.733433 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hk8w_9250698c-3404-4a66-a9b6-286266f0e829/kube-rbac-proxy/0.log" Jan 21 00:19:44 crc kubenswrapper[4873]: I0121 00:19:44.742646 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hk8w_9250698c-3404-4a66-a9b6-286266f0e829/machine-api-operator/0.log" Jan 21 00:19:50 crc kubenswrapper[4873]: I0121 00:19:50.093949 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-chdlx_c72aa9c7-1bed-469a-aac4-9809002c10af/cert-manager-controller/0.log" Jan 21 00:19:50 crc kubenswrapper[4873]: I0121 00:19:50.115503 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-b4nhj_b626d9c0-c895-4804-8bbf-fdc3cc0ffddc/cert-manager-cainjector/0.log" Jan 21 00:19:50 crc kubenswrapper[4873]: I0121 00:19:50.129434 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-prt5l_8d55c464-d289-4042-a198-c80450070784/cert-manager-webhook/0.log" Jan 21 00:19:55 crc kubenswrapper[4873]: I0121 00:19:55.353149 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rtw4s_2e96a2df-caed-45ec-967c-ec7ac2705c55/prometheus-operator/0.log" Jan 21 00:19:55 crc kubenswrapper[4873]: I0121 00:19:55.368420 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t_bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e/prometheus-operator-admission-webhook/0.log" Jan 21 00:19:55 crc kubenswrapper[4873]: I0121 00:19:55.381826 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4_4e79fe55-0691-4477-b520-dd1d567bc5f0/prometheus-operator-admission-webhook/0.log" Jan 21 00:19:55 crc kubenswrapper[4873]: I0121 00:19:55.397432 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9q2cg_35cf15d9-a01a-4ffe-a81a-efd6f7f974ca/operator/0.log" Jan 21 00:19:55 crc kubenswrapper[4873]: I0121 00:19:55.417172 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-t4psx_3d39788c-bdb9-4216-80c3-da758b12627a/perses-operator/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.682316 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f_1cc90934-86a7-4617-930f-422e65f99caf/extract/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.689893 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f_1cc90934-86a7-4617-930f-422e65f99caf/util/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.706389 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar759f_1cc90934-86a7-4617-930f-422e65f99caf/pull/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.715671 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v_75ce7252-19af-4465-afa1-11908cc182b0/extract/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.722893 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v_75ce7252-19af-4465-afa1-11908cc182b0/util/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.731074 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftfl9v_75ce7252-19af-4465-afa1-11908cc182b0/pull/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.743395 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr_17b840c6-394e-4161-a325-b79427f6e4e7/extract/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.750673 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr_17b840c6-394e-4161-a325-b79427f6e4e7/util/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.757511 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebbbhr_17b840c6-394e-4161-a325-b79427f6e4e7/pull/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.768635 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz_3cf327e3-3f0d-4bcc-8246-4b31c21d06be/extract/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.777123 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz_3cf327e3-3f0d-4bcc-8246-4b31c21d06be/util/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.785318 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vbgrz_3cf327e3-3f0d-4bcc-8246-4b31c21d06be/pull/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.980222 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6msps_bbf3181e-73b7-4944-b7d2-e4970cb1b2b5/registry-server/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.984860 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6msps_bbf3181e-73b7-4944-b7d2-e4970cb1b2b5/extract-utilities/0.log" Jan 21 00:20:00 crc kubenswrapper[4873]: I0121 00:20:00.993806 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6msps_bbf3181e-73b7-4944-b7d2-e4970cb1b2b5/extract-content/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.226669 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8dtqn_3907c072-f5a0-44fa-9d7c-4a329a37863e/registry-server/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.230843 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8dtqn_3907c072-f5a0-44fa-9d7c-4a329a37863e/extract-utilities/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.236959 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8dtqn_3907c072-f5a0-44fa-9d7c-4a329a37863e/extract-content/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.249160 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hql4n_864ccb8e-f89f-49d9-985a-2d845b3690bf/marketplace-operator/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.393813 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28d2j_3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a/registry-server/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.398993 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28d2j_3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a/extract-utilities/0.log" Jan 21 00:20:01 crc kubenswrapper[4873]: I0121 00:20:01.405913 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-28d2j_3c362d6b-3be3-4cc3-a71f-b9eb64e8e93a/extract-content/0.log" Jan 21 00:20:05 crc kubenswrapper[4873]: I0121 00:20:05.954970 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rtw4s_2e96a2df-caed-45ec-967c-ec7ac2705c55/prometheus-operator/0.log" Jan 21 00:20:05 crc kubenswrapper[4873]: I0121 00:20:05.977435 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t_bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e/prometheus-operator-admission-webhook/0.log" Jan 21 00:20:05 crc kubenswrapper[4873]: I0121 00:20:05.987895 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4_4e79fe55-0691-4477-b520-dd1d567bc5f0/prometheus-operator-admission-webhook/0.log" Jan 21 00:20:06 crc kubenswrapper[4873]: I0121 00:20:06.007698 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9q2cg_35cf15d9-a01a-4ffe-a81a-efd6f7f974ca/operator/0.log" Jan 21 00:20:06 crc kubenswrapper[4873]: I0121 00:20:06.018158 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-t4psx_3d39788c-bdb9-4216-80c3-da758b12627a/perses-operator/0.log" Jan 21 00:20:15 crc kubenswrapper[4873]: I0121 00:20:15.991078 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rtw4s_2e96a2df-caed-45ec-967c-ec7ac2705c55/prometheus-operator/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.002930 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-jxm7t_bba0db0f-0cc5-4c9a-b26a-0a2c29c5130e/prometheus-operator-admission-webhook/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.013297 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79565fc69b-zjcg4_4e79fe55-0691-4477-b520-dd1d567bc5f0/prometheus-operator-admission-webhook/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.032909 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9q2cg_35cf15d9-a01a-4ffe-a81a-efd6f7f974ca/operator/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.045593 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-t4psx_3d39788c-bdb9-4216-80c3-da758b12627a/perses-operator/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.184392 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-chdlx_c72aa9c7-1bed-469a-aac4-9809002c10af/cert-manager-controller/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.199904 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-b4nhj_b626d9c0-c895-4804-8bbf-fdc3cc0ffddc/cert-manager-cainjector/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.220509 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-prt5l_8d55c464-d289-4042-a198-c80450070784/cert-manager-webhook/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.638803 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-chdlx_c72aa9c7-1bed-469a-aac4-9809002c10af/cert-manager-controller/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.651744 4873 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-b4nhj_b626d9c0-c895-4804-8bbf-fdc3cc0ffddc/cert-manager-cainjector/0.log" Jan 21 00:20:16 crc kubenswrapper[4873]: I0121 00:20:16.661168 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-prt5l_8d55c464-d289-4042-a198-c80450070784/cert-manager-webhook/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.053325 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-z8d7g_62c1a236-f69b-401a-9340-c57b301f0657/control-plane-machine-set-operator/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.067218 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hk8w_9250698c-3404-4a66-a9b6-286266f0e829/kube-rbac-proxy/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.074202 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hk8w_9250698c-3404-4a66-a9b6-286266f0e829/machine-api-operator/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.529668 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-6bc699db7d-tmnp5_8f8b8dbe-76f7-4d11-8b5d-3b067b981141/manager/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.556194 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d9a84d57-1412-491b-94d9-28fd35610566/elasticsearch/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.563839 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d9a84d57-1412-491b-94d9-28fd35610566/elastic-internal-init-filesystem/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.568827 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d9a84d57-1412-491b-94d9-28fd35610566/elastic-internal-suspend/0.log" Jan 21 00:20:17 crc kubenswrapper[4873]: I0121 00:20:17.580334 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-5bb49f789d-6bd24_0fa9b095-3892-4ab4-9b8a-29ee7631bfc7/interconnect-operator/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.552777 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/kube-multus-additional-cni-plugins/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.562152 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/egress-router-binary-copy/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.568953 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/cni-plugins/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.575064 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/bond-cni-plugin/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.582657 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/routeoverride-cni/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.591170 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/whereabouts-cni-bincopy/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.597006 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rd4h7_f51bbfee-1a9c-46e8-81aa-e6359268a146/whereabouts-cni/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.606846 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-mx25s_632798b6-480c-42d9-a549-7b2e8a87b1e2/multus-admission-controller/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.615483 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-mx25s_632798b6-480c-42d9-a549-7b2e8a87b1e2/kube-rbac-proxy/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.635858 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/3.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.667460 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nfrvx_fc2b4503-97f2-44cb-a1ad-e558df352294/kube-multus/2.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.691685 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mx2js_c7f7e62f-ce78-4588-994f-8ab17d7821d1/network-metrics-daemon/0.log" Jan 21 00:20:18 crc kubenswrapper[4873]: I0121 00:20:18.697115 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mx2js_c7f7e62f-ce78-4588-994f-8ab17d7821d1/kube-rbac-proxy/0.log" Jan 21 00:21:01 crc kubenswrapper[4873]: I0121 00:21:01.630922 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:21:01 crc kubenswrapper[4873]: I0121 00:21:01.631438 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:21:31 crc kubenswrapper[4873]: I0121 00:21:31.630222 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:21:31 crc kubenswrapper[4873]: I0121 00:21:31.630800 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:21:40 crc 
kubenswrapper[4873]: I0121 00:21:40.647283 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.666160 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.670419 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.811165 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.811208 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4fz\" (UniqueName: \"kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.811236 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.912576 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.912638 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4fz\" (UniqueName: \"kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.912669 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.913120 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.913261 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.931882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4fz\" (UniqueName: \"kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz\") pod \"community-operators-4fxg6\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:40 crc kubenswrapper[4873]: I0121 00:21:40.988718 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:41 crc kubenswrapper[4873]: I0121 00:21:41.484906 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:42 crc kubenswrapper[4873]: I0121 00:21:42.379318 4873 generic.go:334] "Generic (PLEG): container finished" podID="aadce8af-a543-4934-941b-a02e60855824" containerID="84d3621328862efd22b16fa56a8410e9d9252a3d4eda4ca3ac99d57948dc38aa" exitCode=0 Jan 21 00:21:42 crc kubenswrapper[4873]: I0121 00:21:42.379381 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerDied","Data":"84d3621328862efd22b16fa56a8410e9d9252a3d4eda4ca3ac99d57948dc38aa"} Jan 21 00:21:42 crc kubenswrapper[4873]: I0121 00:21:42.379417 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerStarted","Data":"1df951e26a7e822495c5ded1f8b6d7047b7ac243f15ed90d9751510ee290ea81"} Jan 21 00:21:44 crc kubenswrapper[4873]: I0121 00:21:44.393235 4873 generic.go:334] "Generic (PLEG): container finished" podID="aadce8af-a543-4934-941b-a02e60855824" containerID="f6c730baadad237b1d81a24bfa1dd23c5e95e4fdc2dbf9df5f55b323b0ab60d9" exitCode=0 Jan 21 00:21:44 crc kubenswrapper[4873]: I0121 00:21:44.393352 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerDied","Data":"f6c730baadad237b1d81a24bfa1dd23c5e95e4fdc2dbf9df5f55b323b0ab60d9"} Jan 21 00:21:45 crc kubenswrapper[4873]: I0121 00:21:45.412802 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerStarted","Data":"4a3d150f05aadd7d501935062b1a1f11bbbb9af4eca2dec67a675fda112399bc"} Jan 21 00:21:45 crc kubenswrapper[4873]: I0121 00:21:45.439830 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fxg6" podStartSLOduration=2.976748379 podStartE2EDuration="5.439807163s" podCreationTimestamp="2026-01-21 00:21:40 +0000 UTC" firstStartedPulling="2026-01-21 00:21:42.380853289 +0000 UTC m=+934.620720935" lastFinishedPulling="2026-01-21 00:21:44.843912033 +0000 UTC m=+937.083779719" observedRunningTime="2026-01-21 00:21:45.438018873 +0000 UTC m=+937.677886539" watchObservedRunningTime="2026-01-21 00:21:45.439807163 +0000 UTC m=+937.679674819" Jan 21 00:21:50 crc kubenswrapper[4873]: I0121 00:21:50.988916 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:50 crc kubenswrapper[4873]: I0121 00:21:50.989385 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.056824 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.324012 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.325436 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.333150 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.474464 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.474514 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.474760 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rjb\" (UniqueName: \"kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.496206 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.576743 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rjb\" (UniqueName: \"kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.577238 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.577453 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " 
pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.577970 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.578079 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.598981 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rjb\" (UniqueName: \"kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb\") pod \"redhat-operators-mn6vh\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.661055 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:21:51 crc kubenswrapper[4873]: I0121 00:21:51.887222 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:21:52 crc kubenswrapper[4873]: I0121 00:21:52.458756 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerStarted","Data":"f7bbf761b96066ba549b2ac17d89ec82718d50830a94bf815f9763ac3e6d132e"} Jan 21 00:21:52 crc kubenswrapper[4873]: I0121 00:21:52.459052 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerStarted","Data":"f20065873609b1c71f9ceca232a87adcb752ebc01b4a7a0e247d90dd95dfe05b"} Jan 21 00:21:53 crc kubenswrapper[4873]: I0121 00:21:53.466836 4873 generic.go:334] "Generic (PLEG): container finished" podID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerID="f7bbf761b96066ba549b2ac17d89ec82718d50830a94bf815f9763ac3e6d132e" exitCode=0 Jan 21 00:21:53 crc kubenswrapper[4873]: I0121 00:21:53.466891 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerDied","Data":"f7bbf761b96066ba549b2ac17d89ec82718d50830a94bf815f9763ac3e6d132e"} Jan 21 00:21:53 crc kubenswrapper[4873]: I0121 00:21:53.471657 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:21:53 crc kubenswrapper[4873]: I0121 00:21:53.910002 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:53 crc kubenswrapper[4873]: I0121 00:21:53.910741 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fxg6" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="registry-server" containerID="cri-o://4a3d150f05aadd7d501935062b1a1f11bbbb9af4eca2dec67a675fda112399bc" gracePeriod=2 Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.481347 4873 
generic.go:334] "Generic (PLEG): container finished" podID="aadce8af-a543-4934-941b-a02e60855824" containerID="4a3d150f05aadd7d501935062b1a1f11bbbb9af4eca2dec67a675fda112399bc" exitCode=0 Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.481454 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerDied","Data":"4a3d150f05aadd7d501935062b1a1f11bbbb9af4eca2dec67a675fda112399bc"} Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.800596 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.920515 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt4fz\" (UniqueName: \"kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz\") pod \"aadce8af-a543-4934-941b-a02e60855824\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.920674 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content\") pod \"aadce8af-a543-4934-941b-a02e60855824\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.920732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities\") pod \"aadce8af-a543-4934-941b-a02e60855824\" (UID: \"aadce8af-a543-4934-941b-a02e60855824\") " Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.921784 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities" (OuterVolumeSpecName: "utilities") pod "aadce8af-a543-4934-941b-a02e60855824" (UID: "aadce8af-a543-4934-941b-a02e60855824"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.933608 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz" (OuterVolumeSpecName: "kube-api-access-bt4fz") pod "aadce8af-a543-4934-941b-a02e60855824" (UID: "aadce8af-a543-4934-941b-a02e60855824"). InnerVolumeSpecName "kube-api-access-bt4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:21:54 crc kubenswrapper[4873]: I0121 00:21:54.986851 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aadce8af-a543-4934-941b-a02e60855824" (UID: "aadce8af-a543-4934-941b-a02e60855824"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.022236 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt4fz\" (UniqueName: \"kubernetes.io/projected/aadce8af-a543-4934-941b-a02e60855824-kube-api-access-bt4fz\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.022290 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.022308 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadce8af-a543-4934-941b-a02e60855824-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.489266 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerStarted","Data":"a1f1b969cbd3aba886f891bcce232a1495b5041d32963134e17259e53bbc2579"} Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.491809 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxg6" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.500788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxg6" event={"ID":"aadce8af-a543-4934-941b-a02e60855824","Type":"ContainerDied","Data":"1df951e26a7e822495c5ded1f8b6d7047b7ac243f15ed90d9751510ee290ea81"} Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.500870 4873 scope.go:117] "RemoveContainer" containerID="4a3d150f05aadd7d501935062b1a1f11bbbb9af4eca2dec67a675fda112399bc" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.517196 4873 scope.go:117] "RemoveContainer" containerID="f6c730baadad237b1d81a24bfa1dd23c5e95e4fdc2dbf9df5f55b323b0ab60d9" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.542662 4873 scope.go:117] "RemoveContainer" containerID="84d3621328862efd22b16fa56a8410e9d9252a3d4eda4ca3ac99d57948dc38aa" Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.564855 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:55 crc kubenswrapper[4873]: I0121 00:21:55.570229 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fxg6"] Jan 21 00:21:56 crc kubenswrapper[4873]: I0121 00:21:56.073389 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aadce8af-a543-4934-941b-a02e60855824" path="/var/lib/kubelet/pods/aadce8af-a543-4934-941b-a02e60855824/volumes" Jan 21 00:21:56 crc kubenswrapper[4873]: I0121 00:21:56.506391 4873 generic.go:334] "Generic (PLEG): container finished" podID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerID="a1f1b969cbd3aba886f891bcce232a1495b5041d32963134e17259e53bbc2579" exitCode=0 Jan 21 00:21:56 crc kubenswrapper[4873]: I0121 00:21:56.506860 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerDied","Data":"a1f1b969cbd3aba886f891bcce232a1495b5041d32963134e17259e53bbc2579"} Jan 21 00:21:58 crc kubenswrapper[4873]: I0121 00:21:58.528123 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerStarted","Data":"5f71fa70a42a3eba6309e60997554bb31870bedab74124a69d372d42c8072df7"} Jan 21 00:21:58 crc kubenswrapper[4873]: I0121 00:21:58.545283 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mn6vh" podStartSLOduration=4.039857989 podStartE2EDuration="7.545267364s" podCreationTimestamp="2026-01-21 00:21:51 +0000 UTC" firstStartedPulling="2026-01-21 00:21:53.471246765 +0000 UTC m=+945.711114411" lastFinishedPulling="2026-01-21 00:21:56.9766561 +0000 UTC m=+949.216523786" observedRunningTime="2026-01-21 00:21:58.545038008 +0000 UTC m=+950.784905664" watchObservedRunningTime="2026-01-21 00:21:58.545267364 +0000 UTC m=+950.785135020" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.306153 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:00 crc kubenswrapper[4873]: E0121 00:22:00.307995 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="registry-server" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.308085 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="registry-server" Jan 21 00:22:00 crc kubenswrapper[4873]: E0121 00:22:00.308178 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="extract-utilities" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.308247 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="extract-utilities" Jan 21 00:22:00 crc kubenswrapper[4873]: E0121 00:22:00.308314 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="extract-content" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.308375 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="extract-content" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.308587 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="aadce8af-a543-4934-941b-a02e60855824" containerName="registry-server" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.309627 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.326905 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.405022 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9gg\" (UniqueName: \"kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.405166 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.405198 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.506794 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.506854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9gg\" (UniqueName: \"kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.506931 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.507355 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.507604 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.536650 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gq9gg\" (UniqueName: \"kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg\") pod \"certified-operators-wtcxv\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:00 crc kubenswrapper[4873]: I0121 00:22:00.627400 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.253536 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:01 crc kubenswrapper[4873]: W0121 00:22:01.259562 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e2cb72_0dbf_4fc4_8008_fd18be5a62fc.slice/crio-8aaf7bf5c4049d2d55f068d55d29037b1237e8e951a425153fd5abc72289b411 WatchSource:0}: Error finding container 8aaf7bf5c4049d2d55f068d55d29037b1237e8e951a425153fd5abc72289b411: Status 404 returned error can't find the container with id 8aaf7bf5c4049d2d55f068d55d29037b1237e8e951a425153fd5abc72289b411 Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.561145 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerStarted","Data":"8aaf7bf5c4049d2d55f068d55d29037b1237e8e951a425153fd5abc72289b411"} Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.657946 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.658409 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.658466 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.659308 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.659369 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d" gracePeriod=600 Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 00:22:01.663227 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:01 crc kubenswrapper[4873]: I0121 
00:22:01.663720 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:02 crc kubenswrapper[4873]: I0121 00:22:02.568413 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerID="f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653" exitCode=0 Jan 21 00:22:02 crc kubenswrapper[4873]: I0121 00:22:02.569451 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerDied","Data":"f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653"} Jan 21 00:22:02 crc kubenswrapper[4873]: I0121 00:22:02.702347 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mn6vh" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="registry-server" probeResult="failure" output=< Jan 21 00:22:02 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:22:02 crc kubenswrapper[4873]: > Jan 21 00:22:03 crc kubenswrapper[4873]: I0121 00:22:03.576939 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerStarted","Data":"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860"} Jan 21 00:22:03 crc kubenswrapper[4873]: I0121 00:22:03.580415 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d" exitCode=0 Jan 21 00:22:03 crc kubenswrapper[4873]: I0121 00:22:03.580448 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d"} Jan 21 00:22:03 crc kubenswrapper[4873]: I0121 00:22:03.580494 4873 scope.go:117] "RemoveContainer" containerID="8cef0a4d1b8c465b381f96afc08d2aae348faec8f26beb9b75c912c1d64983ba" Jan 21 00:22:04 crc kubenswrapper[4873]: I0121 00:22:04.587854 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerID="89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860" exitCode=0 Jan 21 00:22:04 crc kubenswrapper[4873]: I0121 00:22:04.587894 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerDied","Data":"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860"} Jan 21 00:22:04 crc kubenswrapper[4873]: I0121 00:22:04.591223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f"} Jan 21 00:22:05 crc kubenswrapper[4873]: I0121 00:22:05.603300 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerStarted","Data":"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d"} Jan 21 00:22:05 crc kubenswrapper[4873]: I0121 00:22:05.624821 4873 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wtcxv" podStartSLOduration=3.012295477 podStartE2EDuration="5.624786023s" podCreationTimestamp="2026-01-21 00:22:00 +0000 UTC" firstStartedPulling="2026-01-21 00:22:02.570149066 +0000 UTC m=+954.810016712" lastFinishedPulling="2026-01-21 00:22:05.182639592 +0000 UTC m=+957.422507258" observedRunningTime="2026-01-21 00:22:05.623181359 +0000 UTC m=+957.863049015" watchObservedRunningTime="2026-01-21 00:22:05.624786023 +0000 UTC m=+957.864653679" Jan 21 00:22:10 crc kubenswrapper[4873]: I0121 00:22:10.628189 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:10 crc kubenswrapper[4873]: I0121 00:22:10.628962 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:10 crc kubenswrapper[4873]: I0121 00:22:10.738964 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:11 crc kubenswrapper[4873]: I0121 00:22:11.716997 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:11 crc kubenswrapper[4873]: I0121 00:22:11.733643 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:11 crc kubenswrapper[4873]: I0121 00:22:11.800175 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:11 crc kubenswrapper[4873]: I0121 00:22:11.848614 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:13 crc kubenswrapper[4873]: I0121 00:22:13.658196 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wtcxv" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="registry-server" containerID="cri-o://15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d" gracePeriod=2 Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.028489 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.051330 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.051587 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mn6vh" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="registry-server" containerID="cri-o://5f71fa70a42a3eba6309e60997554bb31870bedab74124a69d372d42c8072df7" gracePeriod=2 Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.132004 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities\") pod \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.132081 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9gg\" (UniqueName: \"kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg\") pod \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.132154 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content\") pod \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\" (UID: \"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc\") " Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.133594 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities" (OuterVolumeSpecName: "utilities") pod "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" (UID: "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.146771 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg" (OuterVolumeSpecName: "kube-api-access-gq9gg") pod "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" (UID: "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc"). InnerVolumeSpecName "kube-api-access-gq9gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.233207 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9gg\" (UniqueName: \"kubernetes.io/projected/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-kube-api-access-gq9gg\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.233246 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.349322 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" (UID: "f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.437035 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.668865 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerID="15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d" exitCode=0 Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.668961 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerDied","Data":"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d"} Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.668999 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcxv" event={"ID":"f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc","Type":"ContainerDied","Data":"8aaf7bf5c4049d2d55f068d55d29037b1237e8e951a425153fd5abc72289b411"} Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.669027 4873 scope.go:117] "RemoveContainer" containerID="15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.670776 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtcxv" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.675337 4873 generic.go:334] "Generic (PLEG): container finished" podID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerID="5f71fa70a42a3eba6309e60997554bb31870bedab74124a69d372d42c8072df7" exitCode=0 Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.675392 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerDied","Data":"5f71fa70a42a3eba6309e60997554bb31870bedab74124a69d372d42c8072df7"} Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.694975 4873 scope.go:117] "RemoveContainer" containerID="89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.713969 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.719441 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wtcxv"] Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.737484 4873 scope.go:117] "RemoveContainer" containerID="f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.756240 4873 scope.go:117] "RemoveContainer" containerID="15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d" Jan 21 00:22:14 crc kubenswrapper[4873]: E0121 00:22:14.756800 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d\": container with ID starting with 15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d not found: ID does not exist" containerID="15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d" Jan 21 00:22:14 crc 
kubenswrapper[4873]: I0121 00:22:14.756893 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d"} err="failed to get container status \"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d\": rpc error: code = NotFound desc = could not find container \"15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d\": container with ID starting with 15cce7b2ed83de75d3cd9607e2ba37573a49fa4a6f383645b3f4a0529ff7569d not found: ID does not exist" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.756992 4873 scope.go:117] "RemoveContainer" containerID="89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860" Jan 21 00:22:14 crc kubenswrapper[4873]: E0121 00:22:14.757297 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860\": container with ID starting with 89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860 not found: ID does not exist" containerID="89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.757364 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860"} err="failed to get container status \"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860\": rpc error: code = NotFound desc = could not find container \"89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860\": container with ID starting with 89c3f71362de5ab9478fc7b6534079e782d0dbe6cd2ca185978367cd09aac860 not found: ID does not exist" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.757398 4873 scope.go:117] "RemoveContainer" containerID="f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653" Jan 21 00:22:14 crc kubenswrapper[4873]: E0121 00:22:14.757740 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653\": container with ID starting with f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653 not found: ID does not exist" containerID="f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.757771 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653"} err="failed to get container status \"f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653\": rpc error: code = NotFound desc = could not find container \"f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653\": container with ID starting with f629bb46f34e5744b2daddfafee1f3d504436c20155c4de84dd5d86876340653 not found: ID does not exist" Jan 21 00:22:14 crc kubenswrapper[4873]: I0121 00:22:14.935028 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.059034 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rjb\" (UniqueName: \"kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb\") pod \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.060537 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities\") pod \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.060803 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content\") pod \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\" (UID: \"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4\") " Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.061701 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities" (OuterVolumeSpecName: "utilities") pod "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" (UID: "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.066249 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb" (OuterVolumeSpecName: "kube-api-access-p8rjb") pod "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" (UID: "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4"). InnerVolumeSpecName "kube-api-access-p8rjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.162267 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rjb\" (UniqueName: \"kubernetes.io/projected/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-kube-api-access-p8rjb\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.162313 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.224039 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" (UID: "5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.263922 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.685313 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn6vh" event={"ID":"5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4","Type":"ContainerDied","Data":"f20065873609b1c71f9ceca232a87adcb752ebc01b4a7a0e247d90dd95dfe05b"} Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.685402 4873 scope.go:117] "RemoveContainer" containerID="5f71fa70a42a3eba6309e60997554bb31870bedab74124a69d372d42c8072df7" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.685630 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn6vh" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.712300 4873 scope.go:117] "RemoveContainer" containerID="a1f1b969cbd3aba886f891bcce232a1495b5041d32963134e17259e53bbc2579" Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.724829 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.728142 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mn6vh"] Jan 21 00:22:15 crc kubenswrapper[4873]: I0121 00:22:15.745451 4873 scope.go:117] "RemoveContainer" containerID="f7bbf761b96066ba549b2ac17d89ec82718d50830a94bf815f9763ac3e6d132e" Jan 21 00:22:16 crc kubenswrapper[4873]: I0121 00:22:16.077857 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" path="/var/lib/kubelet/pods/5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4/volumes" Jan 21 00:22:16 crc kubenswrapper[4873]: I0121 00:22:16.080480 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" path="/var/lib/kubelet/pods/f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc/volumes" Jan 21 00:24:31 crc kubenswrapper[4873]: I0121 00:24:31.630881 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:24:31 crc kubenswrapper[4873]: I0121 00:24:31.631611 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:25:01 crc kubenswrapper[4873]: I0121 00:25:01.630026 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:25:01 crc kubenswrapper[4873]: I0121 00:25:01.630809 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:25:31 crc kubenswrapper[4873]: I0121 00:25:31.630714 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:25:31 crc kubenswrapper[4873]: I0121 00:25:31.631335 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:25:31 crc kubenswrapper[4873]: I0121 00:25:31.631409 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:25:31 crc kubenswrapper[4873]: I0121 00:25:31.632544 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:25:31 crc kubenswrapper[4873]: I0121 00:25:31.632699 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f" gracePeriod=600 Jan 21 00:25:32 crc kubenswrapper[4873]: I0121 00:25:32.095938 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f"} Jan 21 00:25:32 crc kubenswrapper[4873]: I0121 00:25:32.096118 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f" exitCode=0 Jan 21 00:25:32 crc kubenswrapper[4873]: I0121 00:25:32.096285 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0"} Jan 21 00:25:32 crc kubenswrapper[4873]: I0121 00:25:32.096237 4873 scope.go:117] "RemoveContainer" containerID="b20fde4ecdd2c37321b0d24bf86e0f1f91ba1f01e9a17dbaa9eeb273196cd27d" Jan 21 00:27:31 crc kubenswrapper[4873]: I0121 00:27:31.630792 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:27:31 crc kubenswrapper[4873]: I0121 00:27:31.631394 4873 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:28:01 crc kubenswrapper[4873]: I0121 00:28:01.629976 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:28:01 crc kubenswrapper[4873]: I0121 00:28:01.630613 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:28:31 crc kubenswrapper[4873]: I0121 00:28:31.630815 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:28:31 crc kubenswrapper[4873]: I0121 00:28:31.631491 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:28:31 crc kubenswrapper[4873]: I0121 00:28:31.632033 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:28:31 crc kubenswrapper[4873]: I0121 00:28:31.633029 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:28:31 crc kubenswrapper[4873]: I0121 00:28:31.633161 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0" gracePeriod=600 Jan 21 00:28:32 crc kubenswrapper[4873]: I0121 00:28:32.466311 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0" exitCode=0 Jan 21 00:28:32 crc kubenswrapper[4873]: I0121 00:28:32.466355 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0"} Jan 21 00:28:32 crc kubenswrapper[4873]: I0121 00:28:32.466379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86"} Jan 21 00:28:32 crc kubenswrapper[4873]: I0121 00:28:32.466394 4873 scope.go:117] "RemoveContainer" containerID="a7e09866fbd9fe99c03a35d8f4332873e868700b1a97d1b2499a57597d49a95f" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.145869 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx"] Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146719 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="extract-utilities" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146739 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="extract-utilities" Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146749 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146756 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146769 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="extract-content" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146777 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="extract-content" Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146795 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146801 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146813 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="extract-content" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146820 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="extract-content" Jan 21 00:30:00 crc kubenswrapper[4873]: E0121 00:30:00.146834 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="extract-utilities" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146840 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="extract-utilities" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146965 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="5561b9fb-d9d0-4d7f-82b1-2f540e7c67d4" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.146986 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e2cb72-0dbf-4fc4-8008-fd18be5a62fc" containerName="registry-server" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.147502 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.150524 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.154833 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx"] Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.155006 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.269305 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vcbt\" (UniqueName: \"kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.269436 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.269478 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.370746 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.370805 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.370839 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vcbt\" (UniqueName: \"kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.371619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume\") pod 
\"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.376345 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.389224 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vcbt\" (UniqueName: \"kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt\") pod \"collect-profiles-29482590-6c5gx\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.463650 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:00 crc kubenswrapper[4873]: I0121 00:30:00.914834 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx"] Jan 21 00:30:01 crc kubenswrapper[4873]: I0121 00:30:01.151012 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" event={"ID":"2c0460ea-1794-4687-a0a7-a176f24ab374","Type":"ContainerStarted","Data":"ffcded5b6a0e1afc250207611cee5653908d29f09c05d6d6fb0170673aa3f932"} Jan 21 00:30:01 crc kubenswrapper[4873]: I0121 00:30:01.151059 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" event={"ID":"2c0460ea-1794-4687-a0a7-a176f24ab374","Type":"ContainerStarted","Data":"a83a96e81ee6455b6ebf9103feac4549b6df25476d01dde9de71b41d36f1f6c5"} Jan 21 00:30:01 crc kubenswrapper[4873]: I0121 00:30:01.164277 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" podStartSLOduration=1.164257627 podStartE2EDuration="1.164257627s" podCreationTimestamp="2026-01-21 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:30:01.163940229 +0000 UTC m=+1433.403807865" watchObservedRunningTime="2026-01-21 00:30:01.164257627 +0000 UTC m=+1433.404125273" Jan 21 00:30:02 crc kubenswrapper[4873]: I0121 00:30:02.157216 4873 generic.go:334] "Generic (PLEG): container finished" podID="2c0460ea-1794-4687-a0a7-a176f24ab374" containerID="ffcded5b6a0e1afc250207611cee5653908d29f09c05d6d6fb0170673aa3f932" exitCode=0 Jan 21 00:30:02 crc kubenswrapper[4873]: I0121 00:30:02.157534 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" event={"ID":"2c0460ea-1794-4687-a0a7-a176f24ab374","Type":"ContainerDied","Data":"ffcded5b6a0e1afc250207611cee5653908d29f09c05d6d6fb0170673aa3f932"} Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.446098 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.622735 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vcbt\" (UniqueName: \"kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt\") pod \"2c0460ea-1794-4687-a0a7-a176f24ab374\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.623424 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume\") pod \"2c0460ea-1794-4687-a0a7-a176f24ab374\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.623665 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume\") pod \"2c0460ea-1794-4687-a0a7-a176f24ab374\" (UID: \"2c0460ea-1794-4687-a0a7-a176f24ab374\") " Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.624939 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume" (OuterVolumeSpecName: "config-volume") pod "2c0460ea-1794-4687-a0a7-a176f24ab374" (UID: "2c0460ea-1794-4687-a0a7-a176f24ab374"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.629279 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c0460ea-1794-4687-a0a7-a176f24ab374" (UID: "2c0460ea-1794-4687-a0a7-a176f24ab374"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.630308 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt" (OuterVolumeSpecName: "kube-api-access-4vcbt") pod "2c0460ea-1794-4687-a0a7-a176f24ab374" (UID: "2c0460ea-1794-4687-a0a7-a176f24ab374"). InnerVolumeSpecName "kube-api-access-4vcbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.725817 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c0460ea-1794-4687-a0a7-a176f24ab374-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.725852 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0460ea-1794-4687-a0a7-a176f24ab374-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:30:03 crc kubenswrapper[4873]: I0121 00:30:03.725862 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vcbt\" (UniqueName: \"kubernetes.io/projected/2c0460ea-1794-4687-a0a7-a176f24ab374-kube-api-access-4vcbt\") on node \"crc\" DevicePath \"\"" Jan 21 00:30:04 crc kubenswrapper[4873]: I0121 00:30:04.172404 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" event={"ID":"2c0460ea-1794-4687-a0a7-a176f24ab374","Type":"ContainerDied","Data":"a83a96e81ee6455b6ebf9103feac4549b6df25476d01dde9de71b41d36f1f6c5"} Jan 21 00:30:04 crc kubenswrapper[4873]: I0121 00:30:04.172461 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83a96e81ee6455b6ebf9103feac4549b6df25476d01dde9de71b41d36f1f6c5" Jan 21 00:30:04 crc kubenswrapper[4873]: I0121 00:30:04.172863 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-6c5gx" Jan 21 00:30:31 crc kubenswrapper[4873]: I0121 00:30:31.630447 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:30:31 crc kubenswrapper[4873]: I0121 00:30:31.633360 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:31:01 crc kubenswrapper[4873]: I0121 00:31:01.630303 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:31:01 crc kubenswrapper[4873]: I0121 00:31:01.630919 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.629876 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 
00:31:31.630317 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.630363 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.630903 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.630953 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" gracePeriod=600 Jan 21 00:31:31 crc kubenswrapper[4873]: E0121 00:31:31.768581 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.814649 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" exitCode=0 Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.814693 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86"} Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.815001 4873 scope.go:117] "RemoveContainer" containerID="1fb4dcae8ea4da591a8f9d1cef21ca898073401dd9366e589ce108ed991215b0" Jan 21 00:31:31 crc kubenswrapper[4873]: I0121 00:31:31.815482 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:31:31 crc kubenswrapper[4873]: E0121 00:31:31.816184 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:31:47 crc kubenswrapper[4873]: I0121 00:31:47.063925 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:31:47 crc kubenswrapper[4873]: E0121 
00:31:47.064898 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.079778 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:31:58 crc kubenswrapper[4873]: E0121 00:31:58.080771 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.533806 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:31:58 crc kubenswrapper[4873]: E0121 00:31:58.534113 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0460ea-1794-4687-a0a7-a176f24ab374" containerName="collect-profiles" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.534138 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0460ea-1794-4687-a0a7-a176f24ab374" containerName="collect-profiles" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.534318 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0460ea-1794-4687-a0a7-a176f24ab374" containerName="collect-profiles" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.537200 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.541703 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.691764 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.691809 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjm88\" (UniqueName: \"kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.691861 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.792983 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.793093 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.793136 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjm88\" (UniqueName: \"kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.793817 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.793846 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.813672 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rjm88\" (UniqueName: \"kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88\") pod \"community-operators-6nq8t\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:58 crc kubenswrapper[4873]: I0121 00:31:58.856408 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:31:59 crc kubenswrapper[4873]: I0121 00:31:59.117766 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:32:00 crc kubenswrapper[4873]: I0121 00:32:00.025951 4873 generic.go:334] "Generic (PLEG): container finished" podID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerID="b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2" exitCode=0 Jan 21 00:32:00 crc kubenswrapper[4873]: I0121 00:32:00.026120 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerDied","Data":"b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2"} Jan 21 00:32:00 crc kubenswrapper[4873]: I0121 00:32:00.026407 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerStarted","Data":"bfc61ea1eda46ecad8fbffa78ecc43c55f99c03169e99b5c524ca28f10cc8f1c"} Jan 21 00:32:00 crc kubenswrapper[4873]: I0121 00:32:00.028093 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:32:02 crc kubenswrapper[4873]: I0121 00:32:02.043931 4873 generic.go:334] "Generic (PLEG): container finished" podID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerID="6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0" exitCode=0 Jan 21 00:32:02 crc kubenswrapper[4873]: I0121 00:32:02.044028 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerDied","Data":"6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0"} Jan 21 00:32:03 crc kubenswrapper[4873]: I0121 00:32:03.052871 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerStarted","Data":"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45"} Jan 21 00:32:03 crc kubenswrapper[4873]: I0121 00:32:03.083724 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6nq8t" podStartSLOduration=2.6029555970000002 podStartE2EDuration="5.083695802s" podCreationTimestamp="2026-01-21 00:31:58 +0000 UTC" firstStartedPulling="2026-01-21 00:32:00.027488009 +0000 UTC m=+1552.267355695" lastFinishedPulling="2026-01-21 00:32:02.508228254 +0000 UTC m=+1554.748095900" observedRunningTime="2026-01-21 00:32:03.071341957 +0000 UTC m=+1555.311209643" watchObservedRunningTime="2026-01-21 00:32:03.083695802 +0000 UTC m=+1555.323563478" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.250938 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.253719 4873 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.266300 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.403601 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.403863 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsdh\" (UniqueName: \"kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.404017 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.505362 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.505408 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsdh\" (UniqueName: \"kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.505445 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.505980 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.506012 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.530296 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsdh\" (UniqueName: \"kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh\") pod \"certified-operators-m24zv\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.581227 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:06 crc kubenswrapper[4873]: I0121 00:32:06.835482 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:06 crc kubenswrapper[4873]: W0121 00:32:06.842276 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab80e924_2742_4efc_b7e0_d9ffbf664ffe.slice/crio-aa31bb93b5d98b29f4d3f6e3e104c6e80e45c35b33dbc5916cd87e7d425acdc2 WatchSource:0}: Error finding container aa31bb93b5d98b29f4d3f6e3e104c6e80e45c35b33dbc5916cd87e7d425acdc2: Status 404 returned error can't find the container with id aa31bb93b5d98b29f4d3f6e3e104c6e80e45c35b33dbc5916cd87e7d425acdc2 Jan 21 00:32:07 crc kubenswrapper[4873]: I0121 00:32:07.088506 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerStarted","Data":"aa31bb93b5d98b29f4d3f6e3e104c6e80e45c35b33dbc5916cd87e7d425acdc2"} Jan 21 00:32:08 crc kubenswrapper[4873]: I0121 00:32:08.102111 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerID="88be35d2e92d2dd4dfb90af0861c8a3a531c2b8c28e0a92b7ac6a6f08e1186c7" exitCode=0 Jan 21 00:32:08 crc kubenswrapper[4873]: I0121 00:32:08.102162 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerDied","Data":"88be35d2e92d2dd4dfb90af0861c8a3a531c2b8c28e0a92b7ac6a6f08e1186c7"} Jan 21 00:32:08 crc kubenswrapper[4873]: I0121 00:32:08.857527 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:08 crc kubenswrapper[4873]: I0121 00:32:08.857581 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:08 crc kubenswrapper[4873]: I0121 00:32:08.897617 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:09 crc kubenswrapper[4873]: I0121 00:32:09.112416 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerStarted","Data":"e813f9c0a8550c9ac76eed2a524eee48e41e14621c19d1a82a349eabeb8e39ff"} Jan 21 00:32:09 crc kubenswrapper[4873]: I0121 00:32:09.161547 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:10 crc kubenswrapper[4873]: I0121 00:32:10.122124 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerID="e813f9c0a8550c9ac76eed2a524eee48e41e14621c19d1a82a349eabeb8e39ff" exitCode=0 Jan 21 00:32:10 crc kubenswrapper[4873]: I0121 00:32:10.122254 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerDied","Data":"e813f9c0a8550c9ac76eed2a524eee48e41e14621c19d1a82a349eabeb8e39ff"} Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.063470 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:32:11 crc kubenswrapper[4873]: E0121 00:32:11.064331 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.132379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerStarted","Data":"3119ca4eb2efa9b6bb131792c1433ae7783f6504736fa632b0bb03a09f7344aa"} Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.149924 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m24zv" podStartSLOduration=2.693049846 podStartE2EDuration="5.149901494s" podCreationTimestamp="2026-01-21 00:32:06 +0000 UTC" firstStartedPulling="2026-01-21 00:32:08.10471507 +0000 UTC m=+1560.344582726" lastFinishedPulling="2026-01-21 00:32:10.561566718 +0000 UTC m=+1562.801434374" observedRunningTime="2026-01-21 00:32:11.149290578 +0000 UTC m=+1563.389158224" watchObservedRunningTime="2026-01-21 00:32:11.149901494 +0000 UTC m=+1563.389769140" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.217935 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.218134 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6nq8t" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="registry-server" containerID="cri-o://ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45" gracePeriod=2 Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.562672 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.676055 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjm88\" (UniqueName: \"kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88\") pod \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.676099 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities\") pod \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.676207 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content\") pod \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\" (UID: \"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a\") " Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.676882 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities" (OuterVolumeSpecName: "utilities") pod "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" (UID: "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.686738 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88" (OuterVolumeSpecName: "kube-api-access-rjm88") pod "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" (UID: "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a"). InnerVolumeSpecName "kube-api-access-rjm88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.731017 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" (UID: "7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.778002 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.778045 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:11 crc kubenswrapper[4873]: I0121 00:32:11.778062 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjm88\" (UniqueName: \"kubernetes.io/projected/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a-kube-api-access-rjm88\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.142282 4873 generic.go:334] "Generic (PLEG): container finished" podID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerID="ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45" exitCode=0 Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.142385 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nq8t" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.142419 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerDied","Data":"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45"} Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.143366 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8t" event={"ID":"7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a","Type":"ContainerDied","Data":"bfc61ea1eda46ecad8fbffa78ecc43c55f99c03169e99b5c524ca28f10cc8f1c"} Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.143407 4873 scope.go:117] "RemoveContainer" containerID="ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.166378 4873 scope.go:117] "RemoveContainer" containerID="6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.169768 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.175377 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6nq8t"] Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.189611 4873 scope.go:117] "RemoveContainer" containerID="b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.213255 4873 scope.go:117] "RemoveContainer" containerID="ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45" Jan 21 00:32:12 crc kubenswrapper[4873]: E0121 00:32:12.213695 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45\": container with ID starting with ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45 not found: ID does not exist" containerID="ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.213833 
4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45"} err="failed to get container status \"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45\": rpc error: code = NotFound desc = could not find container \"ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45\": container with ID starting with ee2d74cf9d16b52f8dea54eb5cbb4610647e5681a75928b263cafcc1233a8b45 not found: ID does not exist" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.213935 4873 scope.go:117] "RemoveContainer" containerID="6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0" Jan 21 00:32:12 crc kubenswrapper[4873]: E0121 00:32:12.214402 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0\": container with ID starting with 6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0 not found: ID does not exist" containerID="6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.214493 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0"} err="failed to get container status \"6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0\": rpc error: code = NotFound desc = could not find container \"6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0\": container with ID starting with 6ddee222d261c167a87f5abd1d95779605a7bdee16d2824c9eacd696ef0d1ae0 not found: ID does not exist" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.214590 4873 scope.go:117] "RemoveContainer" containerID="b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2" Jan 21 00:32:12 crc kubenswrapper[4873]: E0121 00:32:12.214870 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2\": container with ID starting with b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2 not found: ID does not exist" containerID="b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2" Jan 21 00:32:12 crc kubenswrapper[4873]: I0121 00:32:12.214962 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2"} err="failed to get container status \"b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2\": rpc error: code = NotFound desc = could not find container \"b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2\": container with ID starting with b150616c62cce12e3a60bfde2dbf8bf1fbee44c99cc2094f0cd18f94951d7ce2 not found: ID does not exist" Jan 21 00:32:14 crc kubenswrapper[4873]: I0121 00:32:14.081785 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" path="/var/lib/kubelet/pods/7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a/volumes" Jan 21 00:32:16 crc kubenswrapper[4873]: I0121 00:32:16.582019 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:16 crc kubenswrapper[4873]: I0121 00:32:16.583514 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:16 crc kubenswrapper[4873]: I0121 00:32:16.697878 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:17 crc kubenswrapper[4873]: I0121 00:32:17.245543 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:17 crc kubenswrapper[4873]: I0121 00:32:17.292855 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:19 crc kubenswrapper[4873]: I0121 00:32:19.208188 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m24zv" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="registry-server" containerID="cri-o://3119ca4eb2efa9b6bb131792c1433ae7783f6504736fa632b0bb03a09f7344aa" gracePeriod=2 Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.219509 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerID="3119ca4eb2efa9b6bb131792c1433ae7783f6504736fa632b0bb03a09f7344aa" exitCode=0 Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.219748 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerDied","Data":"3119ca4eb2efa9b6bb131792c1433ae7783f6504736fa632b0bb03a09f7344aa"} Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.795718 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.825145 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsdh\" (UniqueName: \"kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh\") pod \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.825241 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities\") pod \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.825311 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content\") pod \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\" (UID: \"ab80e924-2742-4efc-b7e0-d9ffbf664ffe\") " Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.826471 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities" (OuterVolumeSpecName: "utilities") pod "ab80e924-2742-4efc-b7e0-d9ffbf664ffe" (UID: "ab80e924-2742-4efc-b7e0-d9ffbf664ffe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.833987 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh" (OuterVolumeSpecName: "kube-api-access-wpsdh") pod "ab80e924-2742-4efc-b7e0-d9ffbf664ffe" (UID: "ab80e924-2742-4efc-b7e0-d9ffbf664ffe"). InnerVolumeSpecName "kube-api-access-wpsdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.887881 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab80e924-2742-4efc-b7e0-d9ffbf664ffe" (UID: "ab80e924-2742-4efc-b7e0-d9ffbf664ffe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.927068 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsdh\" (UniqueName: \"kubernetes.io/projected/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-kube-api-access-wpsdh\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.927102 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:20 crc kubenswrapper[4873]: I0121 00:32:20.927115 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab80e924-2742-4efc-b7e0-d9ffbf664ffe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.225523 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m24zv" event={"ID":"ab80e924-2742-4efc-b7e0-d9ffbf664ffe","Type":"ContainerDied","Data":"aa31bb93b5d98b29f4d3f6e3e104c6e80e45c35b33dbc5916cd87e7d425acdc2"} Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.225597 4873 scope.go:117] "RemoveContainer" containerID="3119ca4eb2efa9b6bb131792c1433ae7783f6504736fa632b0bb03a09f7344aa" Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.225721 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m24zv" Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.257752 4873 scope.go:117] "RemoveContainer" containerID="e813f9c0a8550c9ac76eed2a524eee48e41e14621c19d1a82a349eabeb8e39ff" Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.261190 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.273809 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m24zv"] Jan 21 00:32:21 crc kubenswrapper[4873]: I0121 00:32:21.290666 4873 scope.go:117] "RemoveContainer" containerID="88be35d2e92d2dd4dfb90af0861c8a3a531c2b8c28e0a92b7ac6a6f08e1186c7" Jan 21 00:32:22 crc kubenswrapper[4873]: I0121 00:32:22.075981 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" path="/var/lib/kubelet/pods/ab80e924-2742-4efc-b7e0-d9ffbf664ffe/volumes" Jan 21 00:32:23 crc kubenswrapper[4873]: I0121 00:32:23.063733 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:32:23 crc kubenswrapper[4873]: E0121 00:32:23.064280 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:32:36 crc kubenswrapper[4873]: I0121 00:32:36.063332 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:32:36 crc kubenswrapper[4873]: E0121 00:32:36.064315 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:32:47 crc kubenswrapper[4873]: I0121 00:32:47.065410 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:32:47 crc kubenswrapper[4873]: E0121 00:32:47.067377 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.816406 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817700 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817733 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817777 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="extract-utilities" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817795 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="extract-utilities" Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817816 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="extract-content" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817832 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="extract-content" Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817858 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817873 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817900 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="extract-utilities" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817917 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="extract-utilities" Jan 21 00:32:54 crc kubenswrapper[4873]: E0121 00:32:54.817942 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="extract-content" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.817958 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="extract-content" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.818274 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c63eae7-8ec4-4de6-aa38-a0146fdf3c4a" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.818315 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab80e924-2742-4efc-b7e0-d9ffbf664ffe" containerName="registry-server" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.820240 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.834414 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.957688 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.957797 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:54 crc kubenswrapper[4873]: I0121 00:32:54.958209 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9mx\" (UniqueName: \"kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.059233 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.059283 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.059300 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz9mx\" (UniqueName: \"kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.060088 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.060335 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.083040 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kz9mx\" (UniqueName: \"kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx\") pod \"redhat-operators-q7nd4\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.161506 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.370903 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:32:55 crc kubenswrapper[4873]: I0121 00:32:55.471950 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerStarted","Data":"7093e723d6b532154024c134ec5c15cf5a0debc2b659cc8d62e40e75b588a52f"} Jan 21 00:32:56 crc kubenswrapper[4873]: I0121 00:32:56.480251 4873 generic.go:334] "Generic (PLEG): container finished" podID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerID="e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e" exitCode=0 Jan 21 00:32:56 crc kubenswrapper[4873]: I0121 00:32:56.480402 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerDied","Data":"e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e"} Jan 21 00:32:57 crc kubenswrapper[4873]: I0121 00:32:57.488560 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerStarted","Data":"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133"} Jan 21 00:32:58 crc kubenswrapper[4873]: I0121 00:32:58.497138 4873 generic.go:334] "Generic (PLEG): container finished" podID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerID="1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133" exitCode=0 Jan 21 00:32:58 crc kubenswrapper[4873]: I0121 00:32:58.497219 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerDied","Data":"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133"} Jan 21 00:32:59 crc kubenswrapper[4873]: I0121 00:32:59.504897 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerStarted","Data":"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818"} Jan 21 00:32:59 crc kubenswrapper[4873]: I0121 00:32:59.571042 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7nd4" podStartSLOduration=3.143528528 podStartE2EDuration="5.571023418s" podCreationTimestamp="2026-01-21 00:32:54 +0000 UTC" firstStartedPulling="2026-01-21 00:32:56.482301202 +0000 UTC m=+1608.722168848" lastFinishedPulling="2026-01-21 00:32:58.909796102 +0000 UTC m=+1611.149663738" observedRunningTime="2026-01-21 00:32:59.565131298 +0000 UTC m=+1611.804998994" watchObservedRunningTime="2026-01-21 00:32:59.571023418 +0000 UTC m=+1611.810891074" Jan 21 00:33:02 crc kubenswrapper[4873]: I0121 00:33:02.063869 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 
00:33:02 crc kubenswrapper[4873]: E0121 00:33:02.064246 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:33:05 crc kubenswrapper[4873]: I0121 00:33:05.163286 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:05 crc kubenswrapper[4873]: I0121 00:33:05.163663 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:06 crc kubenswrapper[4873]: I0121 00:33:06.224470 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7nd4" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="registry-server" probeResult="failure" output=< Jan 21 00:33:06 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:33:06 crc kubenswrapper[4873]: > Jan 21 00:33:15 crc kubenswrapper[4873]: I0121 00:33:15.063244 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:33:15 crc kubenswrapper[4873]: E0121 00:33:15.063765 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:33:15 crc kubenswrapper[4873]: I0121 00:33:15.213503 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:15 crc kubenswrapper[4873]: I0121 00:33:15.269394 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:15 crc kubenswrapper[4873]: I0121 00:33:15.454640 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:33:16 crc kubenswrapper[4873]: I0121 00:33:16.658703 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7nd4" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="registry-server" containerID="cri-o://ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818" gracePeriod=2 Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.183053 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.244907 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz9mx\" (UniqueName: \"kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx\") pod \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.244966 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities\") pod \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.245014 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content\") pod \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\" (UID: \"3b63d68c-74e6-483e-9f9c-aa98f09ed99b\") " Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.246493 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities" (OuterVolumeSpecName: "utilities") pod "3b63d68c-74e6-483e-9f9c-aa98f09ed99b" (UID: "3b63d68c-74e6-483e-9f9c-aa98f09ed99b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.250710 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx" (OuterVolumeSpecName: "kube-api-access-kz9mx") pod "3b63d68c-74e6-483e-9f9c-aa98f09ed99b" (UID: "3b63d68c-74e6-483e-9f9c-aa98f09ed99b"). InnerVolumeSpecName "kube-api-access-kz9mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.345834 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz9mx\" (UniqueName: \"kubernetes.io/projected/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-kube-api-access-kz9mx\") on node \"crc\" DevicePath \"\"" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.345867 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.374963 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b63d68c-74e6-483e-9f9c-aa98f09ed99b" (UID: "3b63d68c-74e6-483e-9f9c-aa98f09ed99b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.447296 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b63d68c-74e6-483e-9f9c-aa98f09ed99b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.674654 4873 generic.go:334] "Generic (PLEG): container finished" podID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerID="ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818" exitCode=0 Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.674717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerDied","Data":"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818"} Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.674758 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7nd4" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.674788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7nd4" event={"ID":"3b63d68c-74e6-483e-9f9c-aa98f09ed99b","Type":"ContainerDied","Data":"7093e723d6b532154024c134ec5c15cf5a0debc2b659cc8d62e40e75b588a52f"} Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.674817 4873 scope.go:117] "RemoveContainer" containerID="ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.716046 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.716791 4873 scope.go:117] "RemoveContainer" containerID="1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.723984 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7nd4"] Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.752328 4873 scope.go:117] "RemoveContainer" containerID="e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.770518 4873 scope.go:117] "RemoveContainer" containerID="ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818" Jan 21 00:33:18 crc kubenswrapper[4873]: E0121 00:33:18.770858 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818\": container with ID starting with ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818 not found: ID does not exist" containerID="ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.770901 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818"} err="failed to get container status \"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818\": rpc error: code = NotFound desc = could not find container \"ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818\": container with ID starting with ebd487431012a32eef5f5fba850c07c04b34739c938fa257ff4dd44e9c5b9818 not found: ID does not exist" Jan 21 00:33:18 crc 
kubenswrapper[4873]: I0121 00:33:18.770933 4873 scope.go:117] "RemoveContainer" containerID="1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133" Jan 21 00:33:18 crc kubenswrapper[4873]: E0121 00:33:18.771270 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133\": container with ID starting with 1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133 not found: ID does not exist" containerID="1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.771364 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133"} err="failed to get container status \"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133\": rpc error: code = NotFound desc = could not find container \"1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133\": container with ID starting with 1d52d1f3e462864ab2764e7182f437f5a87e0a81514658c83d6639cb4b773133 not found: ID does not exist" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.771400 4873 scope.go:117] "RemoveContainer" containerID="e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e" Jan 21 00:33:18 crc kubenswrapper[4873]: E0121 00:33:18.772870 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e\": container with ID starting with e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e not found: ID does not exist" containerID="e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e" Jan 21 00:33:18 crc kubenswrapper[4873]: I0121 00:33:18.772910 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e"} err="failed to get container status \"e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e\": rpc error: code = NotFound desc = could not find container \"e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e\": container with ID starting with e096b99a6d40a8e0352d85643b788634ab1ece064aaafec3a3831efa31a81d6e not found: ID does not exist" Jan 21 00:33:20 crc kubenswrapper[4873]: I0121 00:33:20.078014 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" path="/var/lib/kubelet/pods/3b63d68c-74e6-483e-9f9c-aa98f09ed99b/volumes" Jan 21 00:33:29 crc kubenswrapper[4873]: I0121 00:33:29.065034 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:33:29 crc kubenswrapper[4873]: E0121 00:33:29.065853 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:33:43 crc kubenswrapper[4873]: I0121 00:33:43.064439 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" 
Jan 21 00:33:43 crc kubenswrapper[4873]: E0121 00:33:43.065457 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:33:54 crc kubenswrapper[4873]: I0121 00:33:54.067940 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:33:54 crc kubenswrapper[4873]: E0121 00:33:54.068808 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:34:07 crc kubenswrapper[4873]: I0121 00:34:07.064514 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:34:07 crc kubenswrapper[4873]: E0121 00:34:07.065341 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:34:22 crc kubenswrapper[4873]: I0121 00:34:22.066354 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:34:22 crc kubenswrapper[4873]: E0121 00:34:22.067504 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:34:34 crc kubenswrapper[4873]: I0121 00:34:34.063924 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:34:34 crc kubenswrapper[4873]: E0121 00:34:34.064776 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:34:45 crc kubenswrapper[4873]: I0121 00:34:45.063662 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:34:45 crc kubenswrapper[4873]: E0121 00:34:45.064558 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:35:00 crc kubenswrapper[4873]: I0121 00:35:00.064004 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:35:00 crc kubenswrapper[4873]: E0121 00:35:00.064516 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:35:11 crc kubenswrapper[4873]: I0121 00:35:11.064243 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:35:11 crc kubenswrapper[4873]: E0121 00:35:11.065342 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:35:26 crc kubenswrapper[4873]: I0121 00:35:26.064060 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:35:26 crc kubenswrapper[4873]: E0121 00:35:26.066114 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:35:37 crc kubenswrapper[4873]: I0121 00:35:37.063471 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:35:37 crc kubenswrapper[4873]: E0121 00:35:37.064633 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:35:49 crc kubenswrapper[4873]: I0121 00:35:49.064034 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:35:49 crc kubenswrapper[4873]: E0121 00:35:49.065030 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:36:04 crc kubenswrapper[4873]: I0121 00:36:04.062963 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:36:04 crc kubenswrapper[4873]: E0121 00:36:04.063617 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:36:16 crc kubenswrapper[4873]: I0121 00:36:16.064474 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:36:16 crc kubenswrapper[4873]: E0121 00:36:16.065495 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:36:31 crc kubenswrapper[4873]: I0121 00:36:31.066611 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:36:31 crc kubenswrapper[4873]: E0121 00:36:31.069173 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:36:45 crc kubenswrapper[4873]: I0121 00:36:45.064829 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:36:46 crc kubenswrapper[4873]: I0121 00:36:46.243925 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9"} Jan 21 00:39:01 crc kubenswrapper[4873]: I0121 00:39:01.630352 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:39:01 crc kubenswrapper[4873]: I0121 00:39:01.631073 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:39:31 crc kubenswrapper[4873]: I0121 00:39:31.630152 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:39:31 crc kubenswrapper[4873]: I0121 00:39:31.632745 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.631203 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.632267 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.632351 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.633279 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.633346 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9" gracePeriod=600 Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.790763 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9" exitCode=0 Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.790813 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9"} Jan 21 00:40:01 crc kubenswrapper[4873]: I0121 00:40:01.791123 4873 scope.go:117] "RemoveContainer" containerID="e4bbea3028b855e97ac1fa043341e0f6321eb504686a368e7b8e3be1a1306d86" Jan 21 00:40:02 crc kubenswrapper[4873]: I0121 00:40:02.801344 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346"} Jan 21 00:42:01 crc kubenswrapper[4873]: I0121 
00:42:01.630295 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:42:01 crc kubenswrapper[4873]: I0121 00:42:01.630797 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.504212 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:22 crc kubenswrapper[4873]: E0121 00:42:22.505059 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="extract-content" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.505074 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="extract-content" Jan 21 00:42:22 crc kubenswrapper[4873]: E0121 00:42:22.505093 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="registry-server" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.505101 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="registry-server" Jan 21 00:42:22 crc kubenswrapper[4873]: E0121 00:42:22.505124 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="extract-utilities" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.505132 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="extract-utilities" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.505256 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b63d68c-74e6-483e-9f9c-aa98f09ed99b" containerName="registry-server" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.506301 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.515267 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.603769 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtf4b\" (UniqueName: \"kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.603901 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.603944 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.704921 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtf4b\" (UniqueName: \"kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.704997 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.705015 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.705468 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.705816 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.728774 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gtf4b\" (UniqueName: \"kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b\") pod \"certified-operators-dnzdm\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:22 crc kubenswrapper[4873]: I0121 00:42:22.829204 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:23 crc kubenswrapper[4873]: I0121 00:42:23.304006 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:23 crc kubenswrapper[4873]: I0121 00:42:23.833659 4873 generic.go:334] "Generic (PLEG): container finished" podID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerID="5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49" exitCode=0 Jan 21 00:42:23 crc kubenswrapper[4873]: I0121 00:42:23.833721 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerDied","Data":"5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49"} Jan 21 00:42:23 crc kubenswrapper[4873]: I0121 00:42:23.833789 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerStarted","Data":"d8d5d744070925fab78ca9f2bc03df8f09170dcc5e315575487ab964d343dec8"} Jan 21 00:42:23 crc kubenswrapper[4873]: I0121 00:42:23.835727 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:42:24 crc kubenswrapper[4873]: I0121 00:42:24.843511 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerStarted","Data":"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627"} Jan 21 00:42:25 crc kubenswrapper[4873]: I0121 00:42:25.853186 4873 generic.go:334] "Generic (PLEG): container finished" podID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerID="5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627" exitCode=0 Jan 21 00:42:25 crc kubenswrapper[4873]: I0121 00:42:25.853236 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerDied","Data":"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627"} Jan 21 00:42:26 crc kubenswrapper[4873]: I0121 00:42:26.864479 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerStarted","Data":"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865"} Jan 21 00:42:26 crc kubenswrapper[4873]: I0121 00:42:26.887055 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnzdm" podStartSLOduration=2.446101946 podStartE2EDuration="4.887028075s" podCreationTimestamp="2026-01-21 00:42:22 +0000 UTC" firstStartedPulling="2026-01-21 00:42:23.835303362 +0000 UTC m=+2176.075171038" lastFinishedPulling="2026-01-21 00:42:26.276229481 +0000 UTC m=+2178.516097167" observedRunningTime="2026-01-21 00:42:26.883816628 +0000 UTC m=+2179.123684284" watchObservedRunningTime="2026-01-21 
00:42:26.887028075 +0000 UTC m=+2179.126895751" Jan 21 00:42:31 crc kubenswrapper[4873]: I0121 00:42:31.630848 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:42:31 crc kubenswrapper[4873]: I0121 00:42:31.633060 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:42:32 crc kubenswrapper[4873]: I0121 00:42:32.830210 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:32 crc kubenswrapper[4873]: I0121 00:42:32.830263 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:32 crc kubenswrapper[4873]: I0121 00:42:32.889476 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:32 crc kubenswrapper[4873]: I0121 00:42:32.986620 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:33 crc kubenswrapper[4873]: I0121 00:42:33.129889 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:34 crc kubenswrapper[4873]: I0121 00:42:34.944996 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnzdm" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="registry-server" containerID="cri-o://1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865" gracePeriod=2 Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.901177 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.955027 4873 generic.go:334] "Generic (PLEG): container finished" podID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerID="1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865" exitCode=0 Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.955105 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerDied","Data":"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865"} Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.955136 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnzdm" event={"ID":"683059c8-ddb6-4372-98f5-ab31d21a37a0","Type":"ContainerDied","Data":"d8d5d744070925fab78ca9f2bc03df8f09170dcc5e315575487ab964d343dec8"} Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.955157 4873 scope.go:117] "RemoveContainer" containerID="1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865" Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.955130 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnzdm" Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.976275 4873 scope.go:117] "RemoveContainer" containerID="5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627" Jan 21 00:42:35 crc kubenswrapper[4873]: I0121 00:42:35.996655 4873 scope.go:117] "RemoveContainer" containerID="5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.014854 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content\") pod \"683059c8-ddb6-4372-98f5-ab31d21a37a0\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.014969 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtf4b\" (UniqueName: \"kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b\") pod \"683059c8-ddb6-4372-98f5-ab31d21a37a0\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.014999 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities\") pod \"683059c8-ddb6-4372-98f5-ab31d21a37a0\" (UID: \"683059c8-ddb6-4372-98f5-ab31d21a37a0\") " Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.016287 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities" (OuterVolumeSpecName: "utilities") pod "683059c8-ddb6-4372-98f5-ab31d21a37a0" (UID: "683059c8-ddb6-4372-98f5-ab31d21a37a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.021031 4873 scope.go:117] "RemoveContainer" containerID="1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.021305 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b" (OuterVolumeSpecName: "kube-api-access-gtf4b") pod "683059c8-ddb6-4372-98f5-ab31d21a37a0" (UID: "683059c8-ddb6-4372-98f5-ab31d21a37a0"). InnerVolumeSpecName "kube-api-access-gtf4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:42:36 crc kubenswrapper[4873]: E0121 00:42:36.021453 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865\": container with ID starting with 1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865 not found: ID does not exist" containerID="1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.021492 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865"} err="failed to get container status \"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865\": rpc error: code = NotFound desc = could not find container \"1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865\": container with ID starting with 1fbe79096056d2c988070347989c63538df2c01d3daf44a6a2ca95a103ad9865 not found: ID does not exist" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.021517 4873 scope.go:117] "RemoveContainer" containerID="5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627" Jan 21 00:42:36 crc kubenswrapper[4873]: E0121 00:42:36.022798 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627\": container with ID starting with 5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627 not found: ID does not exist" containerID="5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.022850 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627"} err="failed to get container status \"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627\": rpc error: code = NotFound desc = could not find container \"5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627\": container with ID starting with 5f37f87f78e1cc5090586c8191f702386fd1362434130a9c59153300aa460627 not found: ID does not exist" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.022884 4873 scope.go:117] "RemoveContainer" containerID="5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49" Jan 21 00:42:36 crc kubenswrapper[4873]: E0121 00:42:36.023412 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49\": container with ID starting with 5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49 not found: ID does not exist" containerID="5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.023449 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49"} err="failed to get container status \"5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49\": rpc error: code = NotFound desc = could not find container \"5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49\": container with ID starting with 
5c7269f047c008d029346c5f1a937822b00893fbc4f26e3cd0863884ce26ec49 not found: ID does not exist" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.094433 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "683059c8-ddb6-4372-98f5-ab31d21a37a0" (UID: "683059c8-ddb6-4372-98f5-ab31d21a37a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.116843 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtf4b\" (UniqueName: \"kubernetes.io/projected/683059c8-ddb6-4372-98f5-ab31d21a37a0-kube-api-access-gtf4b\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.116869 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.116882 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/683059c8-ddb6-4372-98f5-ab31d21a37a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.305146 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:36 crc kubenswrapper[4873]: I0121 00:42:36.313265 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnzdm"] Jan 21 00:42:38 crc kubenswrapper[4873]: I0121 00:42:38.076075 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" path="/var/lib/kubelet/pods/683059c8-ddb6-4372-98f5-ab31d21a37a0/volumes" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.543033 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:42:53 crc kubenswrapper[4873]: E0121 00:42:53.543865 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="registry-server" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.543959 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="registry-server" Jan 21 00:42:53 crc kubenswrapper[4873]: E0121 00:42:53.543979 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="extract-utilities" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.543987 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="extract-utilities" Jan 21 00:42:53 crc kubenswrapper[4873]: E0121 00:42:53.544010 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="extract-content" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.544017 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="extract-content" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.544139 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="683059c8-ddb6-4372-98f5-ab31d21a37a0" containerName="registry-server" Jan 21 00:42:53 crc kubenswrapper[4873]: 
I0121 00:42:53.546403 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.554517 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.684611 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.684703 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.684773 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvkb\" (UniqueName: \"kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.785839 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.785902 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.785940 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvkb\" (UniqueName: \"kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.786362 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.786585 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc 
kubenswrapper[4873]: I0121 00:42:53.809911 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvkb\" (UniqueName: \"kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb\") pod \"community-operators-2xzx8\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:53 crc kubenswrapper[4873]: I0121 00:42:53.871317 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:42:54 crc kubenswrapper[4873]: I0121 00:42:54.419372 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:42:55 crc kubenswrapper[4873]: I0121 00:42:55.140973 4873 generic.go:334] "Generic (PLEG): container finished" podID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerID="deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145" exitCode=0 Jan 21 00:42:55 crc kubenswrapper[4873]: I0121 00:42:55.141042 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerDied","Data":"deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145"} Jan 21 00:42:55 crc kubenswrapper[4873]: I0121 00:42:55.141084 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerStarted","Data":"0effe787d8db03eac196c637f0eac979a7db41cefed101fd42a11313c1b46517"} Jan 21 00:42:56 crc kubenswrapper[4873]: I0121 00:42:56.153726 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerStarted","Data":"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc"} Jan 21 00:42:57 crc kubenswrapper[4873]: I0121 00:42:57.162731 4873 generic.go:334] "Generic (PLEG): container finished" podID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerID="689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc" exitCode=0 Jan 21 00:42:57 crc kubenswrapper[4873]: I0121 00:42:57.162821 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerDied","Data":"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc"} Jan 21 00:42:58 crc kubenswrapper[4873]: I0121 00:42:58.173867 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerStarted","Data":"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558"} Jan 21 00:42:58 crc kubenswrapper[4873]: I0121 00:42:58.201392 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2xzx8" podStartSLOduration=2.7715098940000003 podStartE2EDuration="5.201374488s" podCreationTimestamp="2026-01-21 00:42:53 +0000 UTC" firstStartedPulling="2026-01-21 00:42:55.143716158 +0000 UTC m=+2207.383583844" lastFinishedPulling="2026-01-21 00:42:57.573580752 +0000 UTC m=+2209.813448438" observedRunningTime="2026-01-21 00:42:58.199771794 +0000 UTC m=+2210.439639460" watchObservedRunningTime="2026-01-21 00:42:58.201374488 +0000 UTC m=+2210.441242154" Jan 21 00:43:01 crc 
kubenswrapper[4873]: I0121 00:43:01.630018 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:43:01 crc kubenswrapper[4873]: I0121 00:43:01.631062 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:43:01 crc kubenswrapper[4873]: I0121 00:43:01.631190 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:43:01 crc kubenswrapper[4873]: I0121 00:43:01.631794 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:43:01 crc kubenswrapper[4873]: I0121 00:43:01.631977 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" gracePeriod=600 Jan 21 00:43:02 crc kubenswrapper[4873]: I0121 00:43:02.205191 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" exitCode=0 Jan 21 00:43:02 crc kubenswrapper[4873]: I0121 00:43:02.205241 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346"} Jan 21 00:43:02 crc kubenswrapper[4873]: I0121 00:43:02.205275 4873 scope.go:117] "RemoveContainer" containerID="713f503ffeacf566ebe29a1c4387194b6ecb8eb8826c50bc3dfa2b8dad9927f9" Jan 21 00:43:02 crc kubenswrapper[4873]: E0121 00:43:02.261064 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:43:03 crc kubenswrapper[4873]: I0121 00:43:03.218774 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:43:03 crc kubenswrapper[4873]: E0121 00:43:03.219035 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:43:03 crc kubenswrapper[4873]: I0121 00:43:03.872610 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:03 crc kubenswrapper[4873]: I0121 00:43:03.872673 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:03 crc kubenswrapper[4873]: I0121 00:43:03.929076 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:04 crc kubenswrapper[4873]: I0121 00:43:04.275966 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:04 crc kubenswrapper[4873]: I0121 00:43:04.323996 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.239107 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2xzx8" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="registry-server" containerID="cri-o://3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558" gracePeriod=2 Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.767128 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.771417 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.808851 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.915830 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrspp\" (UniqueName: \"kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.915921 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:06 crc kubenswrapper[4873]: I0121 00:43:06.915970 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.018242 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrspp\" (UniqueName: \"kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.018715 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.019264 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.018752 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.019348 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.037977 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qrspp\" (UniqueName: \"kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp\") pod \"redhat-operators-zmmxj\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.093656 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.166291 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.221253 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities\") pod \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.221365 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content\") pod \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.221423 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnvkb\" (UniqueName: \"kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb\") pod \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\" (UID: \"e72f727a-0a63-4f0e-83cb-64372e7f8a55\") " Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.223120 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities" (OuterVolumeSpecName: "utilities") pod "e72f727a-0a63-4f0e-83cb-64372e7f8a55" (UID: "e72f727a-0a63-4f0e-83cb-64372e7f8a55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.224672 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb" (OuterVolumeSpecName: "kube-api-access-wnvkb") pod "e72f727a-0a63-4f0e-83cb-64372e7f8a55" (UID: "e72f727a-0a63-4f0e-83cb-64372e7f8a55"). InnerVolumeSpecName "kube-api-access-wnvkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.251239 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xzx8" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.251457 4873 generic.go:334] "Generic (PLEG): container finished" podID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerID="3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558" exitCode=0 Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.251538 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerDied","Data":"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558"} Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.251596 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xzx8" event={"ID":"e72f727a-0a63-4f0e-83cb-64372e7f8a55","Type":"ContainerDied","Data":"0effe787d8db03eac196c637f0eac979a7db41cefed101fd42a11313c1b46517"} Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.251619 4873 scope.go:117] "RemoveContainer" containerID="3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.275450 4873 scope.go:117] "RemoveContainer" containerID="689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.282359 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e72f727a-0a63-4f0e-83cb-64372e7f8a55" (UID: "e72f727a-0a63-4f0e-83cb-64372e7f8a55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.298058 4873 scope.go:117] "RemoveContainer" containerID="deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.322432 4873 scope.go:117] "RemoveContainer" containerID="3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558" Jan 21 00:43:07 crc kubenswrapper[4873]: E0121 00:43:07.322867 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558\": container with ID starting with 3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558 not found: ID does not exist" containerID="3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.322899 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558"} err="failed to get container status \"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558\": rpc error: code = NotFound desc = could not find container \"3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558\": container with ID starting with 3a4255e9f1cbe81409ccac69a2bf22b83fd9ca3e711f2cf266b1104c842a1558 not found: ID does not exist" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.322939 4873 scope.go:117] "RemoveContainer" containerID="689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc" Jan 21 00:43:07 crc kubenswrapper[4873]: E0121 00:43:07.323359 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc\": container with ID starting with 689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc not found: ID does not exist" containerID="689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.323397 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc"} err="failed to get container status \"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc\": rpc error: code = NotFound desc = could not find container \"689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc\": container with ID starting with 689e04566f63c3477d57736a692d41484dc41ffebf1753a4c91b859325096dcc not found: ID does not exist" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.323408 4873 scope.go:117] "RemoveContainer" containerID="deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145" Jan 21 00:43:07 crc kubenswrapper[4873]: E0121 00:43:07.323901 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145\": container with ID starting with deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145 not found: ID does not exist" containerID="deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.323975 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145"} err="failed to get container status \"deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145\": rpc error: code = NotFound desc = could not find container \"deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145\": container with ID starting with deb0f3cee1a73273969279c27f9d44e5be05ffe800a0d25d7e8b5231171d7145 not found: ID does not exist" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.324093 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnvkb\" (UniqueName: \"kubernetes.io/projected/e72f727a-0a63-4f0e-83cb-64372e7f8a55-kube-api-access-wnvkb\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.324113 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.324125 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72f727a-0a63-4f0e-83cb-64372e7f8a55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.413970 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:07 crc kubenswrapper[4873]: W0121 00:43:07.422441 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29b0f106_5d38_4463_ac10_548367e8e50f.slice/crio-046d1ee4af22e0a269dbd1423612b6c03e0ea2cb4a38438fbbad0a58becd43f0 WatchSource:0}: Error finding container 046d1ee4af22e0a269dbd1423612b6c03e0ea2cb4a38438fbbad0a58becd43f0: Status 404 returned error can't find the 
container with id 046d1ee4af22e0a269dbd1423612b6c03e0ea2cb4a38438fbbad0a58becd43f0 Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.587370 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:43:07 crc kubenswrapper[4873]: I0121 00:43:07.592867 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2xzx8"] Jan 21 00:43:08 crc kubenswrapper[4873]: I0121 00:43:08.070881 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" path="/var/lib/kubelet/pods/e72f727a-0a63-4f0e-83cb-64372e7f8a55/volumes" Jan 21 00:43:08 crc kubenswrapper[4873]: I0121 00:43:08.259239 4873 generic.go:334] "Generic (PLEG): container finished" podID="29b0f106-5d38-4463-ac10-548367e8e50f" containerID="d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d" exitCode=0 Jan 21 00:43:08 crc kubenswrapper[4873]: I0121 00:43:08.259290 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerDied","Data":"d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d"} Jan 21 00:43:08 crc kubenswrapper[4873]: I0121 00:43:08.259315 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerStarted","Data":"046d1ee4af22e0a269dbd1423612b6c03e0ea2cb4a38438fbbad0a58becd43f0"} Jan 21 00:43:10 crc kubenswrapper[4873]: I0121 00:43:10.278763 4873 generic.go:334] "Generic (PLEG): container finished" podID="29b0f106-5d38-4463-ac10-548367e8e50f" containerID="3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac" exitCode=0 Jan 21 00:43:10 crc kubenswrapper[4873]: I0121 00:43:10.279044 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerDied","Data":"3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac"} Jan 21 00:43:10 crc kubenswrapper[4873]: E0121 00:43:10.331885 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29b0f106_5d38_4463_ac10_548367e8e50f.slice/crio-3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac.scope\": RecentStats: unable to find data in memory cache]" Jan 21 00:43:11 crc kubenswrapper[4873]: I0121 00:43:11.291594 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerStarted","Data":"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c"} Jan 21 00:43:11 crc kubenswrapper[4873]: I0121 00:43:11.310571 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmmxj" podStartSLOduration=2.851292494 podStartE2EDuration="5.310534111s" podCreationTimestamp="2026-01-21 00:43:06 +0000 UTC" firstStartedPulling="2026-01-21 00:43:08.260979888 +0000 UTC m=+2220.500847534" lastFinishedPulling="2026-01-21 00:43:10.720221505 +0000 UTC m=+2222.960089151" observedRunningTime="2026-01-21 00:43:11.307652395 +0000 UTC m=+2223.547520091" watchObservedRunningTime="2026-01-21 00:43:11.310534111 +0000 UTC m=+2223.550401757" Jan 21 00:43:16 crc kubenswrapper[4873]: 
I0121 00:43:16.067161 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:43:16 crc kubenswrapper[4873]: E0121 00:43:16.067935 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:43:17 crc kubenswrapper[4873]: I0121 00:43:17.166952 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:17 crc kubenswrapper[4873]: I0121 00:43:17.167338 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:18 crc kubenswrapper[4873]: I0121 00:43:18.234613 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zmmxj" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="registry-server" probeResult="failure" output=< Jan 21 00:43:18 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:43:18 crc kubenswrapper[4873]: > Jan 21 00:43:27 crc kubenswrapper[4873]: I0121 00:43:27.239468 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:27 crc kubenswrapper[4873]: I0121 00:43:27.329184 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:27 crc kubenswrapper[4873]: I0121 00:43:27.489231 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.417464 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmmxj" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="registry-server" containerID="cri-o://51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c" gracePeriod=2 Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.843198 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.939601 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrspp\" (UniqueName: \"kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp\") pod \"29b0f106-5d38-4463-ac10-548367e8e50f\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.939673 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities\") pod \"29b0f106-5d38-4463-ac10-548367e8e50f\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.939695 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content\") pod \"29b0f106-5d38-4463-ac10-548367e8e50f\" (UID: \"29b0f106-5d38-4463-ac10-548367e8e50f\") " Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.942142 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities" (OuterVolumeSpecName: "utilities") pod "29b0f106-5d38-4463-ac10-548367e8e50f" (UID: "29b0f106-5d38-4463-ac10-548367e8e50f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:43:28 crc kubenswrapper[4873]: I0121 00:43:28.947911 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp" (OuterVolumeSpecName: "kube-api-access-qrspp") pod "29b0f106-5d38-4463-ac10-548367e8e50f" (UID: "29b0f106-5d38-4463-ac10-548367e8e50f"). InnerVolumeSpecName "kube-api-access-qrspp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.042335 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.042386 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrspp\" (UniqueName: \"kubernetes.io/projected/29b0f106-5d38-4463-ac10-548367e8e50f-kube-api-access-qrspp\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.083963 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29b0f106-5d38-4463-ac10-548367e8e50f" (UID: "29b0f106-5d38-4463-ac10-548367e8e50f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.143497 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b0f106-5d38-4463-ac10-548367e8e50f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.432823 4873 generic.go:334] "Generic (PLEG): container finished" podID="29b0f106-5d38-4463-ac10-548367e8e50f" containerID="51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c" exitCode=0 Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.432872 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerDied","Data":"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c"} Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.432905 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmmxj" event={"ID":"29b0f106-5d38-4463-ac10-548367e8e50f","Type":"ContainerDied","Data":"046d1ee4af22e0a269dbd1423612b6c03e0ea2cb4a38438fbbad0a58becd43f0"} Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.432928 4873 scope.go:117] "RemoveContainer" containerID="51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.432963 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmmxj" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.479027 4873 scope.go:117] "RemoveContainer" containerID="3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.486131 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.492820 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmmxj"] Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.508873 4873 scope.go:117] "RemoveContainer" containerID="d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.534508 4873 scope.go:117] "RemoveContainer" containerID="51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c" Jan 21 00:43:29 crc kubenswrapper[4873]: E0121 00:43:29.535047 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c\": container with ID starting with 51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c not found: ID does not exist" containerID="51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.535103 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c"} err="failed to get container status \"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c\": rpc error: code = NotFound desc = could not find container \"51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c\": container with ID starting with 51aec7471cf3d39db98bfb4f53e19876b9f1941b9d4dcd390b1baa52002eb50c not found: ID does not exist" Jan 21 00:43:29 crc 
kubenswrapper[4873]: I0121 00:43:29.535136 4873 scope.go:117] "RemoveContainer" containerID="3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac" Jan 21 00:43:29 crc kubenswrapper[4873]: E0121 00:43:29.535635 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac\": container with ID starting with 3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac not found: ID does not exist" containerID="3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.535719 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac"} err="failed to get container status \"3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac\": rpc error: code = NotFound desc = could not find container \"3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac\": container with ID starting with 3d681fb513efca9a6e5c4bf3608b8646c14ac28bf5ce2b6d66686a88cedcb1ac not found: ID does not exist" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.535770 4873 scope.go:117] "RemoveContainer" containerID="d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d" Jan 21 00:43:29 crc kubenswrapper[4873]: E0121 00:43:29.536213 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d\": container with ID starting with d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d not found: ID does not exist" containerID="d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d" Jan 21 00:43:29 crc kubenswrapper[4873]: I0121 00:43:29.536440 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d"} err="failed to get container status \"d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d\": rpc error: code = NotFound desc = could not find container \"d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d\": container with ID starting with d83095a120025400eab0be365d7495765c35c118b8c23f2375ba36b74c50df5d not found: ID does not exist" Jan 21 00:43:30 crc kubenswrapper[4873]: I0121 00:43:30.064348 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:43:30 crc kubenswrapper[4873]: E0121 00:43:30.064650 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:43:30 crc kubenswrapper[4873]: I0121 00:43:30.076725 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" path="/var/lib/kubelet/pods/29b0f106-5d38-4463-ac10-548367e8e50f/volumes" Jan 21 00:43:43 crc kubenswrapper[4873]: I0121 00:43:43.063901 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" 
Jan 21 00:43:43 crc kubenswrapper[4873]: E0121 00:43:43.064706 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:43:54 crc kubenswrapper[4873]: I0121 00:43:54.064122 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:43:54 crc kubenswrapper[4873]: E0121 00:43:54.065307 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:44:06 crc kubenswrapper[4873]: I0121 00:44:06.064426 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:44:06 crc kubenswrapper[4873]: E0121 00:44:06.065774 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:44:20 crc kubenswrapper[4873]: I0121 00:44:20.064638 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:44:20 crc kubenswrapper[4873]: E0121 00:44:20.065681 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:44:35 crc kubenswrapper[4873]: I0121 00:44:35.063105 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:44:35 crc kubenswrapper[4873]: E0121 00:44:35.064150 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:44:46 crc kubenswrapper[4873]: I0121 00:44:46.069272 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:44:46 crc kubenswrapper[4873]: E0121 00:44:46.070131 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:44:59 crc kubenswrapper[4873]: I0121 00:44:59.062903 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:44:59 crc kubenswrapper[4873]: E0121 00:44:59.063663 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156247 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn"] Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156730 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156763 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156798 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="extract-utilities" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156816 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="extract-utilities" Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156845 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="extract-content" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156861 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="extract-content" Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156887 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="extract-utilities" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156904 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="extract-utilities" Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156948 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.156966 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: E0121 00:45:00.156989 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="extract-content" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.157006 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" 
containerName="extract-content" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.157268 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b0f106-5d38-4463-ac10-548367e8e50f" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.157297 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72f727a-0a63-4f0e-83cb-64372e7f8a55" containerName="registry-server" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.157966 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.160639 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.160691 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.165007 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn"] Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.308262 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.308340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxlq\" (UniqueName: \"kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.308444 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.410369 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.410625 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.410749 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ztxlq\" (UniqueName: \"kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.412271 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.420814 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.444976 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxlq\" (UniqueName: \"kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq\") pod \"collect-profiles-29482605-h52xn\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.480642 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:00 crc kubenswrapper[4873]: I0121 00:45:00.986165 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn"] Jan 21 00:45:01 crc kubenswrapper[4873]: I0121 00:45:01.206601 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" event={"ID":"84d16031-6f28-4798-906d-244f6f0a992e","Type":"ContainerStarted","Data":"0b0b6b8e1dd0acbb685490976a7026958a283c2c2da9c723d32245af8cc1b3ad"} Jan 21 00:45:01 crc kubenswrapper[4873]: I0121 00:45:01.206658 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" event={"ID":"84d16031-6f28-4798-906d-244f6f0a992e","Type":"ContainerStarted","Data":"23f5672b39bf8cd2fa653a03ad6077dd107f9cdaa45e4913673681c772c0fe44"} Jan 21 00:45:01 crc kubenswrapper[4873]: I0121 00:45:01.226195 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" podStartSLOduration=1.226174549 podStartE2EDuration="1.226174549s" podCreationTimestamp="2026-01-21 00:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:45:01.219114078 +0000 UTC m=+2333.458981754" watchObservedRunningTime="2026-01-21 00:45:01.226174549 +0000 UTC m=+2333.466042215" Jan 21 00:45:02 crc kubenswrapper[4873]: I0121 00:45:02.216139 4873 generic.go:334] "Generic (PLEG): container finished" podID="84d16031-6f28-4798-906d-244f6f0a992e" containerID="0b0b6b8e1dd0acbb685490976a7026958a283c2c2da9c723d32245af8cc1b3ad" exitCode=0 Jan 21 00:45:02 crc kubenswrapper[4873]: I0121 
00:45:02.216191 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" event={"ID":"84d16031-6f28-4798-906d-244f6f0a992e","Type":"ContainerDied","Data":"0b0b6b8e1dd0acbb685490976a7026958a283c2c2da9c723d32245af8cc1b3ad"} Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.503786 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.657596 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxlq\" (UniqueName: \"kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq\") pod \"84d16031-6f28-4798-906d-244f6f0a992e\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.657668 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume\") pod \"84d16031-6f28-4798-906d-244f6f0a992e\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.657798 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume\") pod \"84d16031-6f28-4798-906d-244f6f0a992e\" (UID: \"84d16031-6f28-4798-906d-244f6f0a992e\") " Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.658338 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume" (OuterVolumeSpecName: "config-volume") pod "84d16031-6f28-4798-906d-244f6f0a992e" (UID: "84d16031-6f28-4798-906d-244f6f0a992e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.662660 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84d16031-6f28-4798-906d-244f6f0a992e" (UID: "84d16031-6f28-4798-906d-244f6f0a992e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.666759 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq" (OuterVolumeSpecName: "kube-api-access-ztxlq") pod "84d16031-6f28-4798-906d-244f6f0a992e" (UID: "84d16031-6f28-4798-906d-244f6f0a992e"). InnerVolumeSpecName "kube-api-access-ztxlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.759402 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztxlq\" (UniqueName: \"kubernetes.io/projected/84d16031-6f28-4798-906d-244f6f0a992e-kube-api-access-ztxlq\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.759460 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d16031-6f28-4798-906d-244f6f0a992e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:03 crc kubenswrapper[4873]: I0121 00:45:03.759481 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84d16031-6f28-4798-906d-244f6f0a992e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:04 crc kubenswrapper[4873]: I0121 00:45:04.236787 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" event={"ID":"84d16031-6f28-4798-906d-244f6f0a992e","Type":"ContainerDied","Data":"23f5672b39bf8cd2fa653a03ad6077dd107f9cdaa45e4913673681c772c0fe44"} Jan 21 00:45:04 crc kubenswrapper[4873]: I0121 00:45:04.236837 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f5672b39bf8cd2fa653a03ad6077dd107f9cdaa45e4913673681c772c0fe44" Jan 21 00:45:04 crc kubenswrapper[4873]: I0121 00:45:04.236898 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-h52xn" Jan 21 00:45:04 crc kubenswrapper[4873]: I0121 00:45:04.298181 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm"] Jan 21 00:45:04 crc kubenswrapper[4873]: I0121 00:45:04.301201 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-xgsrm"] Jan 21 00:45:06 crc kubenswrapper[4873]: I0121 00:45:06.076098 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b87797-f91d-4f1a-b0d7-febdafa8e7ba" path="/var/lib/kubelet/pods/20b87797-f91d-4f1a-b0d7-febdafa8e7ba/volumes" Jan 21 00:45:12 crc kubenswrapper[4873]: I0121 00:45:12.502071 4873 scope.go:117] "RemoveContainer" containerID="e280aa140ecccf0a2c84d4e15273f53e55ef395fc3b3c03498cb3dfc9d9bc3e8" Jan 21 00:45:13 crc kubenswrapper[4873]: I0121 00:45:13.064064 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:45:13 crc kubenswrapper[4873]: E0121 00:45:13.064403 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:45:24 crc kubenswrapper[4873]: I0121 00:45:24.063174 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:45:24 crc kubenswrapper[4873]: E0121 00:45:24.063941 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:45:36 crc kubenswrapper[4873]: I0121 00:45:36.063282 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:45:36 crc kubenswrapper[4873]: E0121 00:45:36.063960 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:45:47 crc kubenswrapper[4873]: I0121 00:45:47.063410 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:45:47 crc kubenswrapper[4873]: E0121 00:45:47.064417 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:46:02 crc kubenswrapper[4873]: I0121 00:46:02.070473 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:46:02 crc kubenswrapper[4873]: E0121 00:46:02.071323 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:46:17 crc kubenswrapper[4873]: I0121 00:46:17.064349 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:46:17 crc kubenswrapper[4873]: E0121 00:46:17.067768 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:46:30 crc kubenswrapper[4873]: I0121 00:46:30.063666 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:46:30 crc kubenswrapper[4873]: E0121 00:46:30.065008 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:46:43 crc kubenswrapper[4873]: I0121 00:46:43.064480 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:46:43 crc kubenswrapper[4873]: E0121 00:46:43.065581 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:46:57 crc kubenswrapper[4873]: I0121 00:46:57.063772 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:46:57 crc kubenswrapper[4873]: E0121 00:46:57.064486 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:47:10 crc kubenswrapper[4873]: I0121 00:47:10.064324 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:47:10 crc kubenswrapper[4873]: E0121 00:47:10.065620 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:47:25 crc kubenswrapper[4873]: I0121 00:47:25.063838 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:47:25 crc kubenswrapper[4873]: E0121 00:47:25.064587 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:47:38 crc kubenswrapper[4873]: I0121 00:47:38.073683 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:47:38 crc kubenswrapper[4873]: E0121 00:47:38.075369 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:47:53 crc kubenswrapper[4873]: I0121 00:47:53.064405 4873 
scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:47:53 crc kubenswrapper[4873]: E0121 00:47:53.065120 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:48:04 crc kubenswrapper[4873]: I0121 00:48:04.063734 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:48:04 crc kubenswrapper[4873]: I0121 00:48:04.586325 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e"} Jan 21 00:50:31 crc kubenswrapper[4873]: I0121 00:50:31.630083 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:50:31 crc kubenswrapper[4873]: I0121 00:50:31.630762 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:51:01 crc kubenswrapper[4873]: I0121 00:51:01.629821 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:51:01 crc kubenswrapper[4873]: I0121 00:51:01.630588 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:51:31 crc kubenswrapper[4873]: I0121 00:51:31.630532 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:51:31 crc kubenswrapper[4873]: I0121 00:51:31.631007 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:51:31 crc kubenswrapper[4873]: I0121 00:51:31.631048 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:51:31 crc kubenswrapper[4873]: I0121 00:51:31.631439 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:51:31 crc kubenswrapper[4873]: I0121 00:51:31.631485 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e" gracePeriod=600 Jan 21 00:51:32 crc kubenswrapper[4873]: I0121 00:51:32.243820 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e" exitCode=0 Jan 21 00:51:32 crc kubenswrapper[4873]: I0121 00:51:32.243880 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e"} Jan 21 00:51:32 crc kubenswrapper[4873]: I0121 00:51:32.244167 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525"} Jan 21 00:51:32 crc kubenswrapper[4873]: I0121 00:51:32.244191 4873 scope.go:117] "RemoveContainer" containerID="21da3cc1e1cfe68e130c2a7294e7f97e17abe9bf6d211d596f5d85802171b346" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.697244 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:15 crc kubenswrapper[4873]: E0121 00:53:15.698209 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d16031-6f28-4798-906d-244f6f0a992e" containerName="collect-profiles" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.698227 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d16031-6f28-4798-906d-244f6f0a992e" containerName="collect-profiles" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.698375 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d16031-6f28-4798-906d-244f6f0a992e" containerName="collect-profiles" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.699452 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.719121 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.881160 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.881236 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gzr\" (UniqueName: \"kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.881286 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.889153 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.890490 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.913112 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.982077 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.982133 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gzr\" (UniqueName: \"kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.982370 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.982707 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " 
pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:15 crc kubenswrapper[4873]: I0121 00:53:15.982848 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.003997 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gzr\" (UniqueName: \"kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr\") pod \"redhat-operators-tfnqq\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.030321 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.084296 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kjj\" (UniqueName: \"kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.084381 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.084419 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.185472 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kjj\" (UniqueName: \"kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.186224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.186281 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.189300 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.189613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.228594 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kjj\" (UniqueName: \"kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj\") pod \"community-operators-2sk58\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.273058 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.505123 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.980406 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:16 crc kubenswrapper[4873]: W0121 00:53:16.984771 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824b47c4_7751_49ab_8275_26a8242ee868.slice/crio-131840a6402b54e1a9ea8f36065df7e5056f208fca89446b21d422a3075a9c5e WatchSource:0}: Error finding container 131840a6402b54e1a9ea8f36065df7e5056f208fca89446b21d422a3075a9c5e: Status 404 returned error can't find the container with id 131840a6402b54e1a9ea8f36065df7e5056f208fca89446b21d422a3075a9c5e Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.985054 4873 generic.go:334] "Generic (PLEG): container finished" podID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerID="a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5" exitCode=0 Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.985089 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerDied","Data":"a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5"} Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.985112 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerStarted","Data":"5aa81609b3f8902c8542293dd3d7bb21db194383e4dddfc6412295dab2fcd5fb"} Jan 21 00:53:16 crc kubenswrapper[4873]: I0121 00:53:16.986864 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:53:17 crc kubenswrapper[4873]: I0121 00:53:17.996314 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerStarted","Data":"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1"} Jan 21 00:53:17 crc 
kubenswrapper[4873]: I0121 00:53:17.999381 4873 generic.go:334] "Generic (PLEG): container finished" podID="824b47c4-7751-49ab-8275-26a8242ee868" containerID="543e3b31fe02ff45a82219aff27d0b89201c407ec4c8db635ef1a0d6cc7e259c" exitCode=0 Jan 21 00:53:17 crc kubenswrapper[4873]: I0121 00:53:17.999429 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerDied","Data":"543e3b31fe02ff45a82219aff27d0b89201c407ec4c8db635ef1a0d6cc7e259c"} Jan 21 00:53:17 crc kubenswrapper[4873]: I0121 00:53:17.999457 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerStarted","Data":"131840a6402b54e1a9ea8f36065df7e5056f208fca89446b21d422a3075a9c5e"} Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.096451 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.097862 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.112256 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.217289 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.217451 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfp5b\" (UniqueName: \"kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.217491 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.318979 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfp5b\" (UniqueName: \"kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.319045 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.319112 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.319713 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.319801 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.350535 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfp5b\" (UniqueName: \"kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b\") pod \"certified-operators-mlf7f\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.425846 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:18 crc kubenswrapper[4873]: I0121 00:53:18.637318 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.132090 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerStarted","Data":"95b1e9b51c1b5548fc88b1854535b356a5e3754c850c16a8ec7db98584fbbd24"} Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.134354 4873 generic.go:334] "Generic (PLEG): container finished" podID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerID="d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d" exitCode=0 Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.134398 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerDied","Data":"d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d"} Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.134414 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerStarted","Data":"8b3c99b1213d2e2934e4563cb3ece43256158fe474726a44d6fbe19c65cf7a58"} Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.137185 4873 generic.go:334] "Generic (PLEG): container finished" podID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerID="53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1" exitCode=0 Jan 21 00:53:19 crc kubenswrapper[4873]: I0121 00:53:19.137207 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" 
event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerDied","Data":"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1"} Jan 21 00:53:19 crc kubenswrapper[4873]: E0121 00:53:19.966332 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824b47c4_7751_49ab_8275_26a8242ee868.slice/crio-95b1e9b51c1b5548fc88b1854535b356a5e3754c850c16a8ec7db98584fbbd24.scope\": RecentStats: unable to find data in memory cache]" Jan 21 00:53:20 crc kubenswrapper[4873]: I0121 00:53:20.153141 4873 generic.go:334] "Generic (PLEG): container finished" podID="824b47c4-7751-49ab-8275-26a8242ee868" containerID="95b1e9b51c1b5548fc88b1854535b356a5e3754c850c16a8ec7db98584fbbd24" exitCode=0 Jan 21 00:53:20 crc kubenswrapper[4873]: I0121 00:53:20.153224 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerDied","Data":"95b1e9b51c1b5548fc88b1854535b356a5e3754c850c16a8ec7db98584fbbd24"} Jan 21 00:53:20 crc kubenswrapper[4873]: I0121 00:53:20.158235 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerStarted","Data":"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc"} Jan 21 00:53:20 crc kubenswrapper[4873]: I0121 00:53:20.161934 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerStarted","Data":"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3"} Jan 21 00:53:20 crc kubenswrapper[4873]: I0121 00:53:20.213462 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tfnqq" podStartSLOduration=2.600043341 podStartE2EDuration="5.213443402s" podCreationTimestamp="2026-01-21 00:53:15 +0000 UTC" firstStartedPulling="2026-01-21 00:53:16.98666598 +0000 UTC m=+2829.226533616" lastFinishedPulling="2026-01-21 00:53:19.600066041 +0000 UTC m=+2831.839933677" observedRunningTime="2026-01-21 00:53:20.208352887 +0000 UTC m=+2832.448220553" watchObservedRunningTime="2026-01-21 00:53:20.213443402 +0000 UTC m=+2832.453311058" Jan 21 00:53:21 crc kubenswrapper[4873]: I0121 00:53:21.296186 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerStarted","Data":"44784261e164aa18dd4c78138dd54bc08d2d26ee4e1ce23dc1b7a86ae837f185"} Jan 21 00:53:21 crc kubenswrapper[4873]: I0121 00:53:21.323073 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sk58" podStartSLOduration=3.719712695 podStartE2EDuration="6.32304502s" podCreationTimestamp="2026-01-21 00:53:15 +0000 UTC" firstStartedPulling="2026-01-21 00:53:18.015431599 +0000 UTC m=+2830.255299285" lastFinishedPulling="2026-01-21 00:53:20.618763964 +0000 UTC m=+2832.858631610" observedRunningTime="2026-01-21 00:53:21.316726312 +0000 UTC m=+2833.556593968" watchObservedRunningTime="2026-01-21 00:53:21.32304502 +0000 UTC m=+2833.562912676" Jan 21 00:53:22 crc kubenswrapper[4873]: I0121 00:53:22.304259 4873 generic.go:334] "Generic (PLEG): container finished" podID="041f78e7-e7c6-4c82-881c-4f5051e131d0" 
containerID="a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc" exitCode=0 Jan 21 00:53:22 crc kubenswrapper[4873]: I0121 00:53:22.304355 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerDied","Data":"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc"} Jan 21 00:53:23 crc kubenswrapper[4873]: I0121 00:53:23.311815 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerStarted","Data":"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18"} Jan 21 00:53:23 crc kubenswrapper[4873]: I0121 00:53:23.332470 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mlf7f" podStartSLOduration=1.715460528 podStartE2EDuration="5.33245234s" podCreationTimestamp="2026-01-21 00:53:18 +0000 UTC" firstStartedPulling="2026-01-21 00:53:19.13564321 +0000 UTC m=+2831.375510856" lastFinishedPulling="2026-01-21 00:53:22.752634962 +0000 UTC m=+2834.992502668" observedRunningTime="2026-01-21 00:53:23.330582271 +0000 UTC m=+2835.570449947" watchObservedRunningTime="2026-01-21 00:53:23.33245234 +0000 UTC m=+2835.572319986" Jan 21 00:53:26 crc kubenswrapper[4873]: I0121 00:53:26.030870 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:26 crc kubenswrapper[4873]: I0121 00:53:26.031398 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:26 crc kubenswrapper[4873]: I0121 00:53:26.506164 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:26 crc kubenswrapper[4873]: I0121 00:53:26.507529 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:26 crc kubenswrapper[4873]: I0121 00:53:26.584317 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:27 crc kubenswrapper[4873]: I0121 00:53:27.169715 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfnqq" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="registry-server" probeResult="failure" output=< Jan 21 00:53:27 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 00:53:27 crc kubenswrapper[4873]: > Jan 21 00:53:27 crc kubenswrapper[4873]: I0121 00:53:27.428351 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:28 crc kubenswrapper[4873]: I0121 00:53:28.278052 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:28 crc kubenswrapper[4873]: I0121 00:53:28.426088 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:28 crc kubenswrapper[4873]: I0121 00:53:28.426968 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:28 crc kubenswrapper[4873]: I0121 00:53:28.494505 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:29 crc kubenswrapper[4873]: I0121 00:53:29.361027 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2sk58" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="registry-server" containerID="cri-o://44784261e164aa18dd4c78138dd54bc08d2d26ee4e1ce23dc1b7a86ae837f185" gracePeriod=2 Jan 21 00:53:29 crc kubenswrapper[4873]: I0121 00:53:29.429617 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:30 crc kubenswrapper[4873]: I0121 00:53:30.873574 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:31 crc kubenswrapper[4873]: I0121 00:53:31.630995 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:53:31 crc kubenswrapper[4873]: I0121 00:53:31.631827 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.386503 4873 generic.go:334] "Generic (PLEG): container finished" podID="824b47c4-7751-49ab-8275-26a8242ee868" containerID="44784261e164aa18dd4c78138dd54bc08d2d26ee4e1ce23dc1b7a86ae837f185" exitCode=0 Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.386594 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerDied","Data":"44784261e164aa18dd4c78138dd54bc08d2d26ee4e1ce23dc1b7a86ae837f185"} Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.388027 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mlf7f" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="registry-server" containerID="cri-o://ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18" gracePeriod=2 Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.820386 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.925951 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content\") pod \"041f78e7-e7c6-4c82-881c-4f5051e131d0\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.926003 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities\") pod \"041f78e7-e7c6-4c82-881c-4f5051e131d0\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.926047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfp5b\" (UniqueName: \"kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b\") pod \"041f78e7-e7c6-4c82-881c-4f5051e131d0\" (UID: \"041f78e7-e7c6-4c82-881c-4f5051e131d0\") " Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.927742 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities" (OuterVolumeSpecName: "utilities") pod "041f78e7-e7c6-4c82-881c-4f5051e131d0" (UID: "041f78e7-e7c6-4c82-881c-4f5051e131d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.932676 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b" (OuterVolumeSpecName: "kube-api-access-tfp5b") pod "041f78e7-e7c6-4c82-881c-4f5051e131d0" (UID: "041f78e7-e7c6-4c82-881c-4f5051e131d0"). InnerVolumeSpecName "kube-api-access-tfp5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:53:32 crc kubenswrapper[4873]: I0121 00:53:32.985181 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "041f78e7-e7c6-4c82-881c-4f5051e131d0" (UID: "041f78e7-e7c6-4c82-881c-4f5051e131d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.027965 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.027999 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/041f78e7-e7c6-4c82-881c-4f5051e131d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.028010 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfp5b\" (UniqueName: \"kubernetes.io/projected/041f78e7-e7c6-4c82-881c-4f5051e131d0-kube-api-access-tfp5b\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.408926 4873 generic.go:334] "Generic (PLEG): container finished" podID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerID="ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18" exitCode=0 Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.408966 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerDied","Data":"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18"} Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.408992 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlf7f" event={"ID":"041f78e7-e7c6-4c82-881c-4f5051e131d0","Type":"ContainerDied","Data":"8b3c99b1213d2e2934e4563cb3ece43256158fe474726a44d6fbe19c65cf7a58"} Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.409011 4873 scope.go:117] "RemoveContainer" containerID="ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.409151 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mlf7f" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.436020 4873 scope.go:117] "RemoveContainer" containerID="a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.459840 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.465514 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mlf7f"] Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.485204 4873 scope.go:117] "RemoveContainer" containerID="d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513069 4873 scope.go:117] "RemoveContainer" containerID="ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18" Jan 21 00:53:33 crc kubenswrapper[4873]: E0121 00:53:33.513381 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18\": container with ID starting with ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18 not found: ID does not exist" containerID="ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513418 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18"} err="failed to get container status \"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18\": rpc error: code = NotFound desc = could not find container \"ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18\": container with ID starting with ca84423406a1c6b5471e469c8a352922361d22223485216a3862f9cd44025b18 not found: ID does not exist" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513445 4873 scope.go:117] "RemoveContainer" containerID="a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc" Jan 21 00:53:33 crc kubenswrapper[4873]: E0121 00:53:33.513697 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc\": container with ID starting with a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc not found: ID does not exist" containerID="a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513724 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc"} err="failed to get container status \"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc\": rpc error: code = NotFound desc = could not find container \"a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc\": container with ID starting with a61c9270224d4ebfeeaddefb15a7aa04a3eb2e0714967d7261c2487c01cd0bcc not found: ID does not exist" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513741 4873 scope.go:117] "RemoveContainer" containerID="d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d" Jan 21 00:53:33 crc kubenswrapper[4873]: E0121 00:53:33.513954 4873 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d\": container with ID starting with d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d not found: ID does not exist" containerID="d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.513980 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d"} err="failed to get container status \"d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d\": rpc error: code = NotFound desc = could not find container \"d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d\": container with ID starting with d41fda81fbf92d03e8e6ddc4f831c1adabe800f1dfbb88a58cecb7fe80ed717d not found: ID does not exist" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.572816 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.735684 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4kjj\" (UniqueName: \"kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj\") pod \"824b47c4-7751-49ab-8275-26a8242ee868\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.736538 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content\") pod \"824b47c4-7751-49ab-8275-26a8242ee868\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.736611 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities\") pod \"824b47c4-7751-49ab-8275-26a8242ee868\" (UID: \"824b47c4-7751-49ab-8275-26a8242ee868\") " Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.737531 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities" (OuterVolumeSpecName: "utilities") pod "824b47c4-7751-49ab-8275-26a8242ee868" (UID: "824b47c4-7751-49ab-8275-26a8242ee868"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.740982 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj" (OuterVolumeSpecName: "kube-api-access-g4kjj") pod "824b47c4-7751-49ab-8275-26a8242ee868" (UID: "824b47c4-7751-49ab-8275-26a8242ee868"). InnerVolumeSpecName "kube-api-access-g4kjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.810327 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "824b47c4-7751-49ab-8275-26a8242ee868" (UID: "824b47c4-7751-49ab-8275-26a8242ee868"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.838119 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4kjj\" (UniqueName: \"kubernetes.io/projected/824b47c4-7751-49ab-8275-26a8242ee868-kube-api-access-g4kjj\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.838167 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:33 crc kubenswrapper[4873]: I0121 00:53:33.838182 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824b47c4-7751-49ab-8275-26a8242ee868-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.080767 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" path="/var/lib/kubelet/pods/041f78e7-e7c6-4c82-881c-4f5051e131d0/volumes" Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.421923 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sk58" Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.421816 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sk58" event={"ID":"824b47c4-7751-49ab-8275-26a8242ee868","Type":"ContainerDied","Data":"131840a6402b54e1a9ea8f36065df7e5056f208fca89446b21d422a3075a9c5e"} Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.424179 4873 scope.go:117] "RemoveContainer" containerID="44784261e164aa18dd4c78138dd54bc08d2d26ee4e1ce23dc1b7a86ae837f185" Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.453041 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.458756 4873 scope.go:117] "RemoveContainer" containerID="95b1e9b51c1b5548fc88b1854535b356a5e3754c850c16a8ec7db98584fbbd24" Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.464144 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2sk58"] Jan 21 00:53:34 crc kubenswrapper[4873]: I0121 00:53:34.485287 4873 scope.go:117] "RemoveContainer" containerID="543e3b31fe02ff45a82219aff27d0b89201c407ec4c8db635ef1a0d6cc7e259c" Jan 21 00:53:36 crc kubenswrapper[4873]: I0121 00:53:36.071677 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="824b47c4-7751-49ab-8275-26a8242ee868" path="/var/lib/kubelet/pods/824b47c4-7751-49ab-8275-26a8242ee868/volumes" Jan 21 00:53:36 crc kubenswrapper[4873]: I0121 00:53:36.093251 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:36 crc kubenswrapper[4873]: I0121 00:53:36.141309 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.088778 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.089693 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tfnqq" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" 
containerName="registry-server" containerID="cri-o://7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3" gracePeriod=2 Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.445171 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.450889 4873 generic.go:334] "Generic (PLEG): container finished" podID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerID="7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3" exitCode=0 Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.450911 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfnqq" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.450926 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerDied","Data":"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3"} Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.450955 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfnqq" event={"ID":"72bf1921-bc8d-4e81-b1b6-f90aeda735f6","Type":"ContainerDied","Data":"5aa81609b3f8902c8542293dd3d7bb21db194383e4dddfc6412295dab2fcd5fb"} Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.450973 4873 scope.go:117] "RemoveContainer" containerID="7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.466447 4873 scope.go:117] "RemoveContainer" containerID="53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.504510 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities\") pod \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.504608 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7gzr\" (UniqueName: \"kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr\") pod \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.504709 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content\") pod \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\" (UID: \"72bf1921-bc8d-4e81-b1b6-f90aeda735f6\") " Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.509180 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities" (OuterVolumeSpecName: "utilities") pod "72bf1921-bc8d-4e81-b1b6-f90aeda735f6" (UID: "72bf1921-bc8d-4e81-b1b6-f90aeda735f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.515152 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.520230 4873 scope.go:117] "RemoveContainer" containerID="a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.520223 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr" (OuterVolumeSpecName: "kube-api-access-j7gzr") pod "72bf1921-bc8d-4e81-b1b6-f90aeda735f6" (UID: "72bf1921-bc8d-4e81-b1b6-f90aeda735f6"). InnerVolumeSpecName "kube-api-access-j7gzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.562352 4873 scope.go:117] "RemoveContainer" containerID="7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3" Jan 21 00:53:38 crc kubenswrapper[4873]: E0121 00:53:38.562807 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3\": container with ID starting with 7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3 not found: ID does not exist" containerID="7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.562853 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3"} err="failed to get container status \"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3\": rpc error: code = NotFound desc = could not find container \"7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3\": container with ID starting with 7d129d5ecc4dc96261b9176b8c17d06facc0c711006c114f3b020cb3250c5aa3 not found: ID does not exist" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.562879 4873 scope.go:117] "RemoveContainer" containerID="53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1" Jan 21 00:53:38 crc kubenswrapper[4873]: E0121 00:53:38.563163 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1\": container with ID starting with 53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1 not found: ID does not exist" containerID="53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.563190 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1"} err="failed to get container status \"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1\": rpc error: code = NotFound desc = could not find container \"53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1\": container with ID starting with 53536b8595f91aad3d6fff3c05560dd82bc29cbcc31ddadde27a980c45c244c1 not found: ID does not exist" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.563210 4873 scope.go:117] "RemoveContainer" 
containerID="a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5" Jan 21 00:53:38 crc kubenswrapper[4873]: E0121 00:53:38.563420 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5\": container with ID starting with a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5 not found: ID does not exist" containerID="a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.563446 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5"} err="failed to get container status \"a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5\": rpc error: code = NotFound desc = could not find container \"a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5\": container with ID starting with a4421ab2e492dc280634b1efa881477729f0b7d040d82051986b27058be0bad5 not found: ID does not exist" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.616762 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7gzr\" (UniqueName: \"kubernetes.io/projected/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-kube-api-access-j7gzr\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.628273 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72bf1921-bc8d-4e81-b1b6-f90aeda735f6" (UID: "72bf1921-bc8d-4e81-b1b6-f90aeda735f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.718054 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf1921-bc8d-4e81-b1b6-f90aeda735f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.783689 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:38 crc kubenswrapper[4873]: I0121 00:53:38.786991 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tfnqq"] Jan 21 00:53:40 crc kubenswrapper[4873]: I0121 00:53:40.073138 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" path="/var/lib/kubelet/pods/72bf1921-bc8d-4e81-b1b6-f90aeda735f6/volumes" Jan 21 00:54:01 crc kubenswrapper[4873]: I0121 00:54:01.630177 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:54:01 crc kubenswrapper[4873]: I0121 00:54:01.630702 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.630482 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.632257 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.632354 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.633449 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.633523 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" gracePeriod=600 Jan 21 00:54:31 crc kubenswrapper[4873]: E0121 00:54:31.755545 4873 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.870644 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" exitCode=0 Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.870761 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525"} Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.871297 4873 scope.go:117] "RemoveContainer" containerID="62b741d22aa060b2e54f207a4df9c172c9010340717a82cbbd53a29d67c4448e" Jan 21 00:54:31 crc kubenswrapper[4873]: I0121 00:54:31.872801 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:54:31 crc kubenswrapper[4873]: E0121 00:54:31.873782 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:54:43 crc kubenswrapper[4873]: I0121 00:54:43.065880 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:54:43 crc kubenswrapper[4873]: E0121 00:54:43.067242 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:54:55 crc kubenswrapper[4873]: I0121 00:54:55.063135 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:54:55 crc kubenswrapper[4873]: E0121 00:54:55.064299 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:55:07 crc kubenswrapper[4873]: I0121 00:55:07.064456 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:55:07 crc kubenswrapper[4873]: E0121 00:55:07.065417 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:55:19 crc kubenswrapper[4873]: I0121 00:55:19.064254 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:55:19 crc kubenswrapper[4873]: E0121 00:55:19.065463 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:55:33 crc kubenswrapper[4873]: I0121 00:55:33.066236 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:55:33 crc kubenswrapper[4873]: E0121 00:55:33.067069 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:55:45 crc kubenswrapper[4873]: I0121 00:55:45.064793 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:55:45 crc kubenswrapper[4873]: E0121 00:55:45.065902 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:56:00 crc kubenswrapper[4873]: I0121 00:56:00.064210 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:56:00 crc kubenswrapper[4873]: E0121 00:56:00.065214 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:56:15 crc kubenswrapper[4873]: I0121 00:56:15.063856 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:56:15 crc kubenswrapper[4873]: E0121 00:56:15.065227 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:56:28 crc kubenswrapper[4873]: I0121 00:56:28.072542 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:56:28 crc kubenswrapper[4873]: E0121 00:56:28.075514 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:56:42 crc kubenswrapper[4873]: I0121 00:56:42.068516 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:56:42 crc kubenswrapper[4873]: E0121 00:56:42.072525 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:56:55 crc kubenswrapper[4873]: I0121 00:56:55.063792 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:56:55 crc kubenswrapper[4873]: E0121 00:56:55.064605 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:57:09 crc kubenswrapper[4873]: I0121 00:57:09.063421 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:57:09 crc kubenswrapper[4873]: E0121 00:57:09.066348 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:57:23 crc kubenswrapper[4873]: I0121 00:57:23.064151 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:57:23 crc kubenswrapper[4873]: E0121 00:57:23.064919 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:57:38 crc kubenswrapper[4873]: I0121 00:57:38.070462 4873 
scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:57:38 crc kubenswrapper[4873]: E0121 00:57:38.071883 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:57:50 crc kubenswrapper[4873]: I0121 00:57:50.063910 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:57:50 crc kubenswrapper[4873]: E0121 00:57:50.064588 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:58:05 crc kubenswrapper[4873]: I0121 00:58:05.063432 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:58:05 crc kubenswrapper[4873]: E0121 00:58:05.064249 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:58:18 crc kubenswrapper[4873]: I0121 00:58:18.070181 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:58:18 crc kubenswrapper[4873]: E0121 00:58:18.071676 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:58:29 crc kubenswrapper[4873]: I0121 00:58:29.064089 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:58:29 crc kubenswrapper[4873]: E0121 00:58:29.064900 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:58:42 crc kubenswrapper[4873]: I0121 00:58:42.067765 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:58:42 crc kubenswrapper[4873]: E0121 00:58:42.068623 4873 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:58:55 crc kubenswrapper[4873]: I0121 00:58:55.064252 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:58:55 crc kubenswrapper[4873]: E0121 00:58:55.065222 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:59:06 crc kubenswrapper[4873]: I0121 00:59:06.071008 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:59:06 crc kubenswrapper[4873]: E0121 00:59:06.072123 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:59:18 crc kubenswrapper[4873]: I0121 00:59:18.081416 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:59:18 crc kubenswrapper[4873]: E0121 00:59:18.082405 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 00:59:32 crc kubenswrapper[4873]: I0121 00:59:32.064576 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 00:59:33 crc kubenswrapper[4873]: I0121 00:59:33.152845 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706"} Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.160785 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw"] Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161780 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161802 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="extract-utilities" Jan 
21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161818 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161827 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161841 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161850 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161868 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161877 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161894 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161905 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161923 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161933 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161948 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161958 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.161973 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.161983 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[4873]: E0121 01:00:00.162003 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.162013 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.162162 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="824b47c4-7751-49ab-8275-26a8242ee868" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.162184 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="041f78e7-e7c6-4c82-881c-4f5051e131d0" 
containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.162200 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="72bf1921-bc8d-4e81-b1b6-f90aeda735f6" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.162802 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.167876 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.168061 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.173141 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw"] Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.289146 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.289229 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctvh\" (UniqueName: \"kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.289292 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.389980 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.390061 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctvh\" (UniqueName: \"kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.390085 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: 
\"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.391181 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.407102 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctvh\" (UniqueName: \"kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.411156 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume\") pod \"collect-profiles-29482620-8xgsw\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.488217 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:00 crc kubenswrapper[4873]: I0121 01:00:00.709788 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw"] Jan 21 01:00:00 crc kubenswrapper[4873]: W0121 01:00:00.721343 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68a09f98_a8f3_4d43_b662_0f6dd43e5d4d.slice/crio-8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc WatchSource:0}: Error finding container 8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc: Status 404 returned error can't find the container with id 8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc Jan 21 01:00:01 crc kubenswrapper[4873]: I0121 01:00:01.358336 4873 generic.go:334] "Generic (PLEG): container finished" podID="68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" containerID="ed53a18d9f3e449b988a2ba9143d528cef64fd41b4228d643cb37bf40104a5be" exitCode=0 Jan 21 01:00:01 crc kubenswrapper[4873]: I0121 01:00:01.358537 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" event={"ID":"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d","Type":"ContainerDied","Data":"ed53a18d9f3e449b988a2ba9143d528cef64fd41b4228d643cb37bf40104a5be"} Jan 21 01:00:01 crc kubenswrapper[4873]: I0121 01:00:01.358586 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" event={"ID":"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d","Type":"ContainerStarted","Data":"8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc"} Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.689598 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.822260 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bctvh\" (UniqueName: \"kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh\") pod \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.822350 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume\") pod \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.822407 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume\") pod \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\" (UID: \"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d\") " Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.823136 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" (UID: "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.828742 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" (UID: "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.829709 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh" (OuterVolumeSpecName: "kube-api-access-bctvh") pod "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" (UID: "68a09f98-a8f3-4d43-b662-0f6dd43e5d4d"). InnerVolumeSpecName "kube-api-access-bctvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.924517 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bctvh\" (UniqueName: \"kubernetes.io/projected/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-kube-api-access-bctvh\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.924589 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:02 crc kubenswrapper[4873]: I0121 01:00:02.924611 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a09f98-a8f3-4d43-b662-0f6dd43e5d4d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:03 crc kubenswrapper[4873]: I0121 01:00:03.371615 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" event={"ID":"68a09f98-a8f3-4d43-b662-0f6dd43e5d4d","Type":"ContainerDied","Data":"8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc"} Jan 21 01:00:03 crc kubenswrapper[4873]: I0121 01:00:03.371682 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aebfea6793e6ce2fd60e41de518113959f5a3440b37b77ae515e84d5a6c4dcc" Jan 21 01:00:03 crc kubenswrapper[4873]: I0121 01:00:03.371690 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-8xgsw" Jan 21 01:00:03 crc kubenswrapper[4873]: I0121 01:00:03.787592 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt"] Jan 21 01:00:03 crc kubenswrapper[4873]: I0121 01:00:03.797128 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-d8wvt"] Jan 21 01:00:04 crc kubenswrapper[4873]: I0121 01:00:04.075686 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466dd605-5713-4b28-8bf5-8d1198cc6689" path="/var/lib/kubelet/pods/466dd605-5713-4b28-8bf5-8d1198cc6689/volumes" Jan 21 01:00:12 crc kubenswrapper[4873]: I0121 01:00:12.855662 4873 scope.go:117] "RemoveContainer" containerID="83ee6a225fc4e5c7808b8a68c9b9758fe7364817f7985a310f31fd7859f3edfe" Jan 21 01:02:01 crc kubenswrapper[4873]: I0121 01:02:01.630764 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:02:01 crc kubenswrapper[4873]: I0121 01:02:01.631754 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:02:31 crc kubenswrapper[4873]: I0121 01:02:31.630727 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 01:02:31 crc kubenswrapper[4873]: I0121 01:02:31.631364 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:03:01 crc kubenswrapper[4873]: I0121 01:03:01.630831 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:03:01 crc kubenswrapper[4873]: I0121 01:03:01.631385 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:03:01 crc kubenswrapper[4873]: I0121 01:03:01.631439 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 01:03:01 crc kubenswrapper[4873]: I0121 01:03:01.632356 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 01:03:01 crc kubenswrapper[4873]: I0121 01:03:01.632598 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706" gracePeriod=600 Jan 21 01:03:02 crc kubenswrapper[4873]: I0121 01:03:02.706015 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706" exitCode=0 Jan 21 01:03:02 crc kubenswrapper[4873]: I0121 01:03:02.706110 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706"} Jan 21 01:03:02 crc kubenswrapper[4873]: I0121 01:03:02.706372 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerStarted","Data":"4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6"} Jan 21 01:03:02 crc kubenswrapper[4873]: I0121 01:03:02.706398 4873 scope.go:117] "RemoveContainer" containerID="9f8a97e3bbdd549e2abeb2f0d08ec804ab9e6c0fde87d0223e6c6b2e69d45525" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.351536 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:36 crc kubenswrapper[4873]: E0121 01:03:36.352350 4873 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" containerName="collect-profiles" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.352364 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" containerName="collect-profiles" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.352498 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a09f98-a8f3-4d43-b662-0f6dd43e5d4d" containerName="collect-profiles" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.353521 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.371508 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.407770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpv5\" (UniqueName: \"kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.407866 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.407946 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.509191 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.509621 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.509878 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpv5\" (UniqueName: \"kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.509992 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.510435 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.564197 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpv5\" (UniqueName: \"kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5\") pod \"community-operators-4rrdf\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:36 crc kubenswrapper[4873]: I0121 01:03:36.682185 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:37 crc kubenswrapper[4873]: I0121 01:03:37.118284 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:37 crc kubenswrapper[4873]: W0121 01:03:37.125184 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15c2c585_c271_4d16_8fcd_42b73ea51913.slice/crio-1fea952b4f19e591bb261eeb7661260abdaa17c6f48f35cb709f89de00daedef WatchSource:0}: Error finding container 1fea952b4f19e591bb261eeb7661260abdaa17c6f48f35cb709f89de00daedef: Status 404 returned error can't find the container with id 1fea952b4f19e591bb261eeb7661260abdaa17c6f48f35cb709f89de00daedef Jan 21 01:03:38 crc kubenswrapper[4873]: I0121 01:03:38.004045 4873 generic.go:334] "Generic (PLEG): container finished" podID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerID="0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce" exitCode=0 Jan 21 01:03:38 crc kubenswrapper[4873]: I0121 01:03:38.004178 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerDied","Data":"0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce"} Jan 21 01:03:38 crc kubenswrapper[4873]: I0121 01:03:38.004419 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerStarted","Data":"1fea952b4f19e591bb261eeb7661260abdaa17c6f48f35cb709f89de00daedef"} Jan 21 01:03:38 crc kubenswrapper[4873]: I0121 01:03:38.006526 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 01:03:39 crc kubenswrapper[4873]: I0121 01:03:39.014839 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerStarted","Data":"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2"} Jan 21 01:03:40 crc kubenswrapper[4873]: I0121 01:03:40.022469 4873 generic.go:334] "Generic (PLEG): container finished" podID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerID="0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2" exitCode=0 
Jan 21 01:03:40 crc kubenswrapper[4873]: I0121 01:03:40.022524 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerDied","Data":"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2"} Jan 21 01:03:41 crc kubenswrapper[4873]: I0121 01:03:41.033209 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerStarted","Data":"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494"} Jan 21 01:03:41 crc kubenswrapper[4873]: I0121 01:03:41.063673 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4rrdf" podStartSLOduration=2.647442004 podStartE2EDuration="5.063653089s" podCreationTimestamp="2026-01-21 01:03:36 +0000 UTC" firstStartedPulling="2026-01-21 01:03:38.006271464 +0000 UTC m=+3450.246139110" lastFinishedPulling="2026-01-21 01:03:40.422482549 +0000 UTC m=+3452.662350195" observedRunningTime="2026-01-21 01:03:41.06029726 +0000 UTC m=+3453.300164926" watchObservedRunningTime="2026-01-21 01:03:41.063653089 +0000 UTC m=+3453.303520755" Jan 21 01:03:46 crc kubenswrapper[4873]: I0121 01:03:46.682487 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:46 crc kubenswrapper[4873]: I0121 01:03:46.683004 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:46 crc kubenswrapper[4873]: I0121 01:03:46.738970 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:47 crc kubenswrapper[4873]: I0121 01:03:47.122998 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:47 crc kubenswrapper[4873]: I0121 01:03:47.173804 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:49 crc kubenswrapper[4873]: I0121 01:03:49.083812 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4rrdf" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="registry-server" containerID="cri-o://287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494" gracePeriod=2 Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.023023 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.094209 4873 generic.go:334] "Generic (PLEG): container finished" podID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerID="287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494" exitCode=0 Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.094247 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerDied","Data":"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494"} Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.094273 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rrdf" event={"ID":"15c2c585-c271-4d16-8fcd-42b73ea51913","Type":"ContainerDied","Data":"1fea952b4f19e591bb261eeb7661260abdaa17c6f48f35cb709f89de00daedef"} Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.094290 4873 scope.go:117] "RemoveContainer" containerID="287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.094396 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rrdf" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.105375 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content\") pod \"15c2c585-c271-4d16-8fcd-42b73ea51913\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.105477 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmpv5\" (UniqueName: \"kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5\") pod \"15c2c585-c271-4d16-8fcd-42b73ea51913\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.105626 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities\") pod \"15c2c585-c271-4d16-8fcd-42b73ea51913\" (UID: \"15c2c585-c271-4d16-8fcd-42b73ea51913\") " Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.106519 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities" (OuterVolumeSpecName: "utilities") pod "15c2c585-c271-4d16-8fcd-42b73ea51913" (UID: "15c2c585-c271-4d16-8fcd-42b73ea51913"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.111962 4873 scope.go:117] "RemoveContainer" containerID="0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.126358 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5" (OuterVolumeSpecName: "kube-api-access-cmpv5") pod "15c2c585-c271-4d16-8fcd-42b73ea51913" (UID: "15c2c585-c271-4d16-8fcd-42b73ea51913"). InnerVolumeSpecName "kube-api-access-cmpv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.139627 4873 scope.go:117] "RemoveContainer" containerID="0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.154284 4873 scope.go:117] "RemoveContainer" containerID="287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494" Jan 21 01:03:50 crc kubenswrapper[4873]: E0121 01:03:50.154608 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494\": container with ID starting with 287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494 not found: ID does not exist" containerID="287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.154642 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494"} err="failed to get container status \"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494\": rpc error: code = NotFound desc = could not find container \"287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494\": container with ID starting with 287a0638e866bc382b795575012ed8dcb657aefba888af0c6221a8c3866da494 not found: ID does not exist" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.154663 4873 scope.go:117] "RemoveContainer" containerID="0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2" Jan 21 01:03:50 crc kubenswrapper[4873]: E0121 01:03:50.154940 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2\": container with ID starting with 0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2 not found: ID does not exist" containerID="0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.154993 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2"} err="failed to get container status \"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2\": rpc error: code = NotFound desc = could not find container \"0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2\": container with ID starting with 0ae2a602429361080e5c87e14a9392f97b62b1aa6de4bb3b47edb1bbc41ae7c2 not found: ID does not exist" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.155026 4873 scope.go:117] "RemoveContainer" containerID="0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce" Jan 21 01:03:50 crc kubenswrapper[4873]: E0121 01:03:50.155307 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce\": container with ID starting with 0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce not found: ID does not exist" containerID="0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.155335 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce"} err="failed to get container status \"0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce\": rpc error: code = NotFound desc = could not find container \"0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce\": container with ID starting with 0cc9ea8029d69a4342b1d87465d102fc22d114c7c4ffc9cdbb2c23849307b6ce not found: ID does not exist" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.159564 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15c2c585-c271-4d16-8fcd-42b73ea51913" (UID: "15c2c585-c271-4d16-8fcd-42b73ea51913"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.207666 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.207705 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15c2c585-c271-4d16-8fcd-42b73ea51913-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.207718 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmpv5\" (UniqueName: \"kubernetes.io/projected/15c2c585-c271-4d16-8fcd-42b73ea51913-kube-api-access-cmpv5\") on node \"crc\" DevicePath \"\"" Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.430978 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:50 crc kubenswrapper[4873]: I0121 01:03:50.437365 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4rrdf"] Jan 21 01:03:52 crc kubenswrapper[4873]: I0121 01:03:52.083156 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" path="/var/lib/kubelet/pods/15c2c585-c271-4d16-8fcd-42b73ea51913/volumes" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.140881 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:24 crc kubenswrapper[4873]: E0121 01:04:24.142601 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="extract-content" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.142655 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="extract-content" Jan 21 01:04:24 crc kubenswrapper[4873]: E0121 01:04:24.142680 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="extract-utilities" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.142702 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="extract-utilities" Jan 21 01:04:24 crc kubenswrapper[4873]: E0121 01:04:24.142724 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="registry-server" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.142739 4873 
state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="registry-server" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.143012 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2c585-c271-4d16-8fcd-42b73ea51913" containerName="registry-server" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.146965 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.163836 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.300195 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpt2\" (UniqueName: \"kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.300391 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.300425 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.401468 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.401753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.401794 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpt2\" (UniqueName: \"kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.402278 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.402310 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.424841 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpt2\" (UniqueName: \"kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2\") pod \"redhat-operators-57b65\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.490640 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:24 crc kubenswrapper[4873]: I0121 01:04:24.749067 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:25 crc kubenswrapper[4873]: I0121 01:04:25.397393 4873 generic.go:334] "Generic (PLEG): container finished" podID="a88dab15-f42b-469a-83f4-45e300c3c660" containerID="06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2" exitCode=0 Jan 21 01:04:25 crc kubenswrapper[4873]: I0121 01:04:25.397439 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerDied","Data":"06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2"} Jan 21 01:04:25 crc kubenswrapper[4873]: I0121 01:04:25.397463 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerStarted","Data":"3e2f7ab1dcff3cea0fb82b4f93a034a9b1ba07c87311ce270af28d2cd85ddeb8"} Jan 21 01:04:27 crc kubenswrapper[4873]: I0121 01:04:27.414524 4873 generic.go:334] "Generic (PLEG): container finished" podID="a88dab15-f42b-469a-83f4-45e300c3c660" containerID="6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364" exitCode=0 Jan 21 01:04:27 crc kubenswrapper[4873]: I0121 01:04:27.414728 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerDied","Data":"6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364"} Jan 21 01:04:28 crc kubenswrapper[4873]: I0121 01:04:28.427796 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerStarted","Data":"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf"} Jan 21 01:04:28 crc kubenswrapper[4873]: I0121 01:04:28.469105 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57b65" podStartSLOduration=1.99144096 podStartE2EDuration="4.469083152s" podCreationTimestamp="2026-01-21 01:04:24 +0000 UTC" firstStartedPulling="2026-01-21 01:04:25.398784401 +0000 UTC m=+3497.638652037" lastFinishedPulling="2026-01-21 01:04:27.876426543 +0000 UTC m=+3500.116294229" observedRunningTime="2026-01-21 01:04:28.460947115 +0000 UTC m=+3500.700814801" watchObservedRunningTime="2026-01-21 01:04:28.469083152 +0000 UTC m=+3500.708950808" Jan 21 01:04:34 crc kubenswrapper[4873]: I0121 01:04:34.490855 
4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:34 crc kubenswrapper[4873]: I0121 01:04:34.491609 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:35 crc kubenswrapper[4873]: I0121 01:04:35.568604 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57b65" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="registry-server" probeResult="failure" output=< Jan 21 01:04:35 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Jan 21 01:04:35 crc kubenswrapper[4873]: > Jan 21 01:04:44 crc kubenswrapper[4873]: I0121 01:04:44.547642 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:44 crc kubenswrapper[4873]: I0121 01:04:44.596579 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:44 crc kubenswrapper[4873]: I0121 01:04:44.789314 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:45 crc kubenswrapper[4873]: I0121 01:04:45.569358 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57b65" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="registry-server" containerID="cri-o://d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf" gracePeriod=2 Jan 21 01:04:45 crc kubenswrapper[4873]: I0121 01:04:45.957640 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.128032 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content\") pod \"a88dab15-f42b-469a-83f4-45e300c3c660\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.128131 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjpt2\" (UniqueName: \"kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2\") pod \"a88dab15-f42b-469a-83f4-45e300c3c660\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.128191 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities\") pod \"a88dab15-f42b-469a-83f4-45e300c3c660\" (UID: \"a88dab15-f42b-469a-83f4-45e300c3c660\") " Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.130214 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities" (OuterVolumeSpecName: "utilities") pod "a88dab15-f42b-469a-83f4-45e300c3c660" (UID: "a88dab15-f42b-469a-83f4-45e300c3c660"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.149453 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2" (OuterVolumeSpecName: "kube-api-access-sjpt2") pod "a88dab15-f42b-469a-83f4-45e300c3c660" (UID: "a88dab15-f42b-469a-83f4-45e300c3c660"). InnerVolumeSpecName "kube-api-access-sjpt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.230478 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjpt2\" (UniqueName: \"kubernetes.io/projected/a88dab15-f42b-469a-83f4-45e300c3c660-kube-api-access-sjpt2\") on node \"crc\" DevicePath \"\"" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.231674 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.260718 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a88dab15-f42b-469a-83f4-45e300c3c660" (UID: "a88dab15-f42b-469a-83f4-45e300c3c660"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.333436 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88dab15-f42b-469a-83f4-45e300c3c660-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.578661 4873 generic.go:334] "Generic (PLEG): container finished" podID="a88dab15-f42b-469a-83f4-45e300c3c660" containerID="d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf" exitCode=0 Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.578773 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerDied","Data":"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf"} Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.579014 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57b65" event={"ID":"a88dab15-f42b-469a-83f4-45e300c3c660","Type":"ContainerDied","Data":"3e2f7ab1dcff3cea0fb82b4f93a034a9b1ba07c87311ce270af28d2cd85ddeb8"} Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.579037 4873 scope.go:117] "RemoveContainer" containerID="d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.578803 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57b65" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.608744 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.615463 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57b65"] Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.627818 4873 scope.go:117] "RemoveContainer" containerID="6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.647770 4873 scope.go:117] "RemoveContainer" containerID="06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.678021 4873 scope.go:117] "RemoveContainer" containerID="d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf" Jan 21 01:04:46 crc kubenswrapper[4873]: E0121 01:04:46.678433 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf\": container with ID starting with d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf not found: ID does not exist" containerID="d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.678483 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf"} err="failed to get container status \"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf\": rpc error: code = NotFound desc = could not find container \"d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf\": container with ID starting with d48cdc002ad20bbf82b26da32e629da1553197fa0b2122097b267890d1e932cf not found: ID does not exist" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.678517 4873 scope.go:117] "RemoveContainer" containerID="6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364" Jan 21 01:04:46 crc kubenswrapper[4873]: E0121 01:04:46.678878 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364\": container with ID starting with 6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364 not found: ID does not exist" containerID="6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.678909 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364"} err="failed to get container status \"6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364\": rpc error: code = NotFound desc = could not find container \"6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364\": container with ID starting with 6af1ef4b9ec5236b9c0aaac590a90816c2f76fff95f2fcfd5af03e4544d76364 not found: ID does not exist" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.678943 4873 scope.go:117] "RemoveContainer" containerID="06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2" Jan 21 01:04:46 crc kubenswrapper[4873]: E0121 01:04:46.679294 4873 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2\": container with ID starting with 06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2 not found: ID does not exist" containerID="06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2" Jan 21 01:04:46 crc kubenswrapper[4873]: I0121 01:04:46.679327 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2"} err="failed to get container status \"06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2\": rpc error: code = NotFound desc = could not find container \"06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2\": container with ID starting with 06e8403427de2707f2c8c4018482ddd8e07e63abb37cdf90bde8429638f758b2 not found: ID does not exist" Jan 21 01:04:48 crc kubenswrapper[4873]: I0121 01:04:48.076819 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" path="/var/lib/kubelet/pods/a88dab15-f42b-469a-83f4-45e300c3c660/volumes" Jan 21 01:05:01 crc kubenswrapper[4873]: I0121 01:05:01.630824 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:05:01 crc kubenswrapper[4873]: I0121 01:05:01.631589 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.159073 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:16 crc kubenswrapper[4873]: E0121 01:05:16.160261 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="extract-utilities" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.160296 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="extract-utilities" Jan 21 01:05:16 crc kubenswrapper[4873]: E0121 01:05:16.160323 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="extract-content" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.160340 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="extract-content" Jan 21 01:05:16 crc kubenswrapper[4873]: E0121 01:05:16.160366 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="registry-server" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.160382 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="registry-server" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.160713 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88dab15-f42b-469a-83f4-45e300c3c660" containerName="registry-server" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 
01:05:16.162631 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.177661 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.361128 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.361591 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.361713 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2kz\" (UniqueName: \"kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.463351 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.463396 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2kz\" (UniqueName: \"kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.463464 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.463926 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.464135 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc 
kubenswrapper[4873]: I0121 01:05:16.488506 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2kz\" (UniqueName: \"kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz\") pod \"certified-operators-t5lnr\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.503466 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.751441 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:16 crc kubenswrapper[4873]: I0121 01:05:16.824807 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerStarted","Data":"efa698c8c0c0665d48d57380b3f186fbb8de1b026ee56e3d5cc2e924f37c289a"} Jan 21 01:05:17 crc kubenswrapper[4873]: I0121 01:05:17.845078 4873 generic.go:334] "Generic (PLEG): container finished" podID="157158ba-7d45-47a7-9142-1dd3dce0d276" containerID="36a4c9108ce5538bb7a636e26309a56dd823a56e342a26711d1b1bbc37ef1ff5" exitCode=0 Jan 21 01:05:17 crc kubenswrapper[4873]: I0121 01:05:17.845180 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerDied","Data":"36a4c9108ce5538bb7a636e26309a56dd823a56e342a26711d1b1bbc37ef1ff5"} Jan 21 01:05:18 crc kubenswrapper[4873]: I0121 01:05:18.855084 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerStarted","Data":"e3d8697cf7a13df96ade70d800fedfd40d2eb0a7296519aeaede1b6009ef92e0"} Jan 21 01:05:19 crc kubenswrapper[4873]: I0121 01:05:19.868522 4873 generic.go:334] "Generic (PLEG): container finished" podID="157158ba-7d45-47a7-9142-1dd3dce0d276" containerID="e3d8697cf7a13df96ade70d800fedfd40d2eb0a7296519aeaede1b6009ef92e0" exitCode=0 Jan 21 01:05:19 crc kubenswrapper[4873]: I0121 01:05:19.868637 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerDied","Data":"e3d8697cf7a13df96ade70d800fedfd40d2eb0a7296519aeaede1b6009ef92e0"} Jan 21 01:05:20 crc kubenswrapper[4873]: I0121 01:05:20.875784 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerStarted","Data":"f97dbcb50002104193807f6e1e7fb37fea4849cfaf125ee896d5fd4314ea50ae"} Jan 21 01:05:26 crc kubenswrapper[4873]: I0121 01:05:26.504895 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:26 crc kubenswrapper[4873]: I0121 01:05:26.505328 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:26 crc kubenswrapper[4873]: I0121 01:05:26.566601 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:26 crc kubenswrapper[4873]: I0121 01:05:26.608496 4873 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5lnr" podStartSLOduration=8.109010318 podStartE2EDuration="10.607657741s" podCreationTimestamp="2026-01-21 01:05:16 +0000 UTC" firstStartedPulling="2026-01-21 01:05:17.847669029 +0000 UTC m=+3550.087536715" lastFinishedPulling="2026-01-21 01:05:20.346316472 +0000 UTC m=+3552.586184138" observedRunningTime="2026-01-21 01:05:20.914754784 +0000 UTC m=+3553.154622470" watchObservedRunningTime="2026-01-21 01:05:26.607657741 +0000 UTC m=+3558.847525417" Jan 21 01:05:26 crc kubenswrapper[4873]: I0121 01:05:26.997512 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:27 crc kubenswrapper[4873]: I0121 01:05:27.070686 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:28 crc kubenswrapper[4873]: I0121 01:05:28.948613 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5lnr" podUID="157158ba-7d45-47a7-9142-1dd3dce0d276" containerName="registry-server" containerID="cri-o://f97dbcb50002104193807f6e1e7fb37fea4849cfaf125ee896d5fd4314ea50ae" gracePeriod=2 Jan 21 01:05:29 crc kubenswrapper[4873]: I0121 01:05:29.956765 4873 generic.go:334] "Generic (PLEG): container finished" podID="157158ba-7d45-47a7-9142-1dd3dce0d276" containerID="f97dbcb50002104193807f6e1e7fb37fea4849cfaf125ee896d5fd4314ea50ae" exitCode=0 Jan 21 01:05:29 crc kubenswrapper[4873]: I0121 01:05:29.956800 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerDied","Data":"f97dbcb50002104193807f6e1e7fb37fea4849cfaf125ee896d5fd4314ea50ae"} Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.071600 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.252050 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities\") pod \"157158ba-7d45-47a7-9142-1dd3dce0d276\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.252153 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content\") pod \"157158ba-7d45-47a7-9142-1dd3dce0d276\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.252210 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b2kz\" (UniqueName: \"kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz\") pod \"157158ba-7d45-47a7-9142-1dd3dce0d276\" (UID: \"157158ba-7d45-47a7-9142-1dd3dce0d276\") " Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.253655 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities" (OuterVolumeSpecName: "utilities") pod "157158ba-7d45-47a7-9142-1dd3dce0d276" (UID: "157158ba-7d45-47a7-9142-1dd3dce0d276"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.257204 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz" (OuterVolumeSpecName: "kube-api-access-7b2kz") pod "157158ba-7d45-47a7-9142-1dd3dce0d276" (UID: "157158ba-7d45-47a7-9142-1dd3dce0d276"). InnerVolumeSpecName "kube-api-access-7b2kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.312966 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "157158ba-7d45-47a7-9142-1dd3dce0d276" (UID: "157158ba-7d45-47a7-9142-1dd3dce0d276"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.353541 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b2kz\" (UniqueName: \"kubernetes.io/projected/157158ba-7d45-47a7-9142-1dd3dce0d276-kube-api-access-7b2kz\") on node \"crc\" DevicePath \"\"" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.353589 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.353600 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157158ba-7d45-47a7-9142-1dd3dce0d276-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.966235 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5lnr" event={"ID":"157158ba-7d45-47a7-9142-1dd3dce0d276","Type":"ContainerDied","Data":"efa698c8c0c0665d48d57380b3f186fbb8de1b026ee56e3d5cc2e924f37c289a"} Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.966641 4873 scope.go:117] "RemoveContainer" containerID="f97dbcb50002104193807f6e1e7fb37fea4849cfaf125ee896d5fd4314ea50ae" Jan 21 01:05:30 crc kubenswrapper[4873]: I0121 01:05:30.966812 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5lnr" Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.006078 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.009428 4873 scope.go:117] "RemoveContainer" containerID="e3d8697cf7a13df96ade70d800fedfd40d2eb0a7296519aeaede1b6009ef92e0" Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.013626 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5lnr"] Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.026173 4873 scope.go:117] "RemoveContainer" containerID="36a4c9108ce5538bb7a636e26309a56dd823a56e342a26711d1b1bbc37ef1ff5" Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.630202 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:05:31 crc kubenswrapper[4873]: I0121 01:05:31.630746 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:05:32 crc kubenswrapper[4873]: I0121 01:05:32.072479 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157158ba-7d45-47a7-9142-1dd3dce0d276" path="/var/lib/kubelet/pods/157158ba-7d45-47a7-9142-1dd3dce0d276/volumes" Jan 21 01:06:01 crc kubenswrapper[4873]: I0121 01:06:01.631078 4873 patch_prober.go:28] interesting pod/machine-config-daemon-ppcbs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:06:01 crc kubenswrapper[4873]: I0121 01:06:01.631706 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:06:01 crc kubenswrapper[4873]: I0121 01:06:01.631767 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" Jan 21 01:06:01 crc kubenswrapper[4873]: I0121 01:06:01.632489 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6"} pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 01:06:01 crc kubenswrapper[4873]: I0121 01:06:01.632610 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerName="machine-config-daemon" containerID="cri-o://4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6" 
gracePeriod=600 Jan 21 01:06:01 crc kubenswrapper[4873]: E0121 01:06:01.768777 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" Jan 21 01:06:02 crc kubenswrapper[4873]: I0121 01:06:02.225979 4873 generic.go:334] "Generic (PLEG): container finished" podID="a9fc5804-a6f3-4b7e-b115-68275cb68417" containerID="4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6" exitCode=0 Jan 21 01:06:02 crc kubenswrapper[4873]: I0121 01:06:02.226038 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" event={"ID":"a9fc5804-a6f3-4b7e-b115-68275cb68417","Type":"ContainerDied","Data":"4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6"} Jan 21 01:06:02 crc kubenswrapper[4873]: I0121 01:06:02.226083 4873 scope.go:117] "RemoveContainer" containerID="16d1582f112391828b9b6ad0ed777805be7db23759406ff0899cae7e5ac28706" Jan 21 01:06:02 crc kubenswrapper[4873]: I0121 01:06:02.226808 4873 scope.go:117] "RemoveContainer" containerID="4f5fad28b8fece331c432af0aeb89d86d50a61daa46aafce05ab4ca7fe22dde6" Jan 21 01:06:02 crc kubenswrapper[4873]: E0121 01:06:02.227248 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppcbs_openshift-machine-config-operator(a9fc5804-a6f3-4b7e-b115-68275cb68417)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppcbs" podUID="a9fc5804-a6f3-4b7e-b115-68275cb68417" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134023207024442 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134023207017357 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134013615016504 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134013616015455 5ustar corecore